Responsible and Safe AI
The PhD School at the Faculty of Engineering, University of Southern Denmark
Prerequisites
Basic understanding of machine learning, linear algebra, and programming (preferably in Python). Familiarity with deep learning frameworks such as PyTorch is recommended but not required.
Course Description
As Artificial Intelligence (AI) has become an integral part of our daily routines, it has raised serious concerns regarding privacy, fairness, robustness, and accountability. Consequently, developing Responsible and Safe AI is now essential in modern AI research. To support this goal, this PhD course introduces advanced methods, specifically Federated Learning (FL), Self-Supervised Learning (SSL), and Machine Unlearning (MU), that serve as building blocks of Responsible and Safe AI. These methods address key challenges in data privacy, model adaptability, and learning from limited and unlabelled data.
Participants will explore how FL enables collaborative model training without sharing raw data, how SSL promotes label-efficient and bias-resilient representation learning, and how MU ensures compliance with data deletion and privacy regulations. The course combines theoretical understanding with hands-on implementation in Python, preparing students to design, train, and evaluate AI systems that meet ethical, legal, and safety requirements. Applications will be discussed in domains such as computer vision and healthcare, where responsible AI practices are critical.
Course Content:
- Foundations of Responsible and Safe AI
  - Principles of fairness, accountability, transparency, and robustness.
  - Overview of AI governance frameworks and regulations (e.g., the EU AI Act).
- Federated Learning (FL): Privacy-Preserving Collaborative Training
  - Concepts, architectures (horizontal, vertical, cross-silo), and algorithms such as FedAvg and FedProx (a minimal FedAvg sketch follows this list).
  - Addressing challenges such as data heterogeneity, personalization, and communication efficiency.
  - Secure aggregation, differential privacy, and adversarial robustness in FL.
- Self-Supervised Learning (SSL): Representation Learning without Labels
  - Contrastive learning and masked modeling (a contrastive-loss sketch follows this list).
  - Learning generalizable and domain-invariant features to mitigate label bias.
  - Applications of SSL in healthcare and other data-scarce domains.
- Machine Unlearning (MU): Data Deletion and Compliance
  - Motivation and necessity for unlearning under GDPR’s “Right to be Forgotten”.
  - Retraining-based, optimization-based, and approximate unlearning methods (a retraining-based baseline is sketched after this list).
  - Verification and evaluation of unlearning effectiveness and model safety.
- Integrating Responsible Learning Paradigms
  - Combining FL, SSL, and MU in unified pipelines (e.g., Federated SSL and Federated Unlearning).
  - Trade-offs between performance, privacy, and fairness.
- Case Studies, Hands-on Sessions, and Project Work
  - Implementation of FL and SSL using open-source frameworks (e.g., PyTorch, PySyft, and Flower).
  - Case studies on responsible and safe AI applications in medical imaging and other sensitive data environments.
  - Independent or group project on designing a responsible and safe AI solution.
Publicly available open-source datasets will be utilized in tutorials to provide students with hands-on experience.
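To give a flavour of the FL tutorials, the following is a minimal, self-contained sketch of FedAvg-style aggregation on a toy linear-regression problem. It uses plain NumPy and synthetic client data invented for illustration; the actual tutorial code and frameworks may differ. Each client takes a local gradient step on its own data, and the server averages the resulting models weighted by client dataset size, so raw data never leaves the clients.

```python
import numpy as np

def local_sgd_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's local data."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, client_datasets, lr=0.1):
    """One FedAvg round: each client trains locally, and the server averages
    the resulting models weighted by local dataset size."""
    client_weights, client_sizes = [], []
    for X, y in client_datasets:
        w = local_sgd_step(global_weights.copy(), X, y, lr)
        client_weights.append(w)
        client_sizes.append(len(y))
    sizes = np.array(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

# Toy setup: three clients with synthetic local data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print("estimated weights:", w)  # approaches [2.0, -1.0] without pooling raw data
```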
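Similarly, the contrastive-learning topic in the SSL module can be illustrated with a SimCLR-style NT-Xent loss. The sketch below assumes PyTorch is available and uses random tensors in place of a real encoder's embeddings of two augmented views.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive (NT-Xent) loss for two augmented views.

    z1, z2: (N, d) embeddings of two views of the same N samples.
    Positive pairs are (i, i); all other embeddings act as negatives.
    """
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                    # mask out self-similarity
    # For index i, its positive sits N positions away in the concatenated batch.
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)])
    return F.cross_entropy(sim, targets)

# Toy usage: random embeddings standing in for an encoder's outputs.
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(nt_xent_loss(z1, z2).item())
```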
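Finally, the retraining-based (exact) unlearning baseline from the MU module can be sketched in a few lines. This hypothetical example uses scikit-learn's LogisticRegression on synthetic data and compares the original and retrained models' confidence on the forget set as a naive verification check; the course covers more rigorous evaluation protocols.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Exact (retraining-based) unlearning: drop the forget set and retrain from scratch.
# This is the gold standard that approximate unlearning methods are compared against.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forget_idx = np.arange(50)                                   # samples whose influence must be removed
retain_idx = np.setdiff1d(np.arange(len(y)), forget_idx)

original = LogisticRegression(max_iter=1000).fit(X, y)
retrained = LogisticRegression(max_iter=1000).fit(X[retain_idx], y[retain_idx])

# Naive verification: a model that has truly "forgotten" these samples should
# typically be less confident on them than the original model that trained on them.
def mean_true_class_confidence(model, Xs, ys):
    return model.predict_proba(Xs)[np.arange(len(ys)), ys].mean()

print("original  confidence on forget set:", mean_true_class_confidence(original, X[forget_idx], y[forget_idx]))
print("retrained confidence on forget set:", mean_true_class_confidence(retrained, X[forget_idx], y[forget_idx]))
```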
Learning Outcomes:
Knowledge
- Understand the foundational principles of Responsible and Safe AI, including privacy, fairness, robustness, and accountability.
- Understand how FL, SSL, and MU contribute to ethical and regulation-compliant AI systems.
Skills
- Able to implement FL, SSL, and MU techniques for privacy-preserving training, representation learning, and data influence removal.
- Able to integrate these methods to develop AI models that meet legal and ethical requirements.
Competences
- Can identify scenarios requiring Responsible and Safe AI practices and select suitable methods to address them.
- Can evaluate AI systems for compliance with privacy regulations and alignment with ethical standards.
Teaching Method (Instruction):
Lectures (30%), Python-based tutorials (20%), and project work (50%). A certain level of study activity is expected during assignment work, as well as in preparation for and follow-up on lectures. Students will have access to the university’s e-learning platform.
Examination Conditions
There are no examination conditions. However, students are strongly encouraged to participate in class activities to enhance their understanding of the concepts.
Evaluation
Successful completion of the course requires carrying out a small research-oriented project, presented as a report followed by a short oral presentation.
Assessment – 7-point grading scale
Second examiner – none (i.e., assessed by the course instructor)
Course Type and Target Group
PhD-level course intended for students in computer science, data science, artificial intelligence, machine learning, mathematics, statistics, computational biology, bioinformatics, healthcare informatics, medical imaging, robotics, and related disciplines, as well as anyone interested in developing responsible, ethical, and safe AI systems.
Workload and Schedule
- Lectures: 24 hours
- Tutorials: 16 hours
- Project work and self-study: 85 hours
- Total workload: approx. 125 hours
Offered In
The course will be offered in both online and on-site (offline) modes. On-site teaching takes place at the Maersk Mc-Kinney Moller Institute, SDU, Odense.
Period (time of year and/or dates)
The course is offered every spring semester (January–May).
Note: Students will gain hands-on experience by applying the concepts learned in class to publicly available open-source datasets.
Suggested Literature / References
- EU AI Act, Official Regulation on Artificial Intelligence, European Union, 2024.
- Right to erasure (“right to be forgotten”), Articles 17 & 19 of the General Data Protection Regulation (GDPR), 2016.
- Kairouz, P., et al. Advances and Open Problems in Federated Learning. Foundations and Trends® in Machine Learning, 2021.
- Gui, J., et al. A Survey on Self-Supervised Learning: Algorithms, Applications, and Future Trends. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12), 9052–9071, 2024.
- Xu, H., et al. Machine Unlearning: A Survey. ACM Computing Surveys, 56(1), 2023.
Price/Course fee
Free for Danish PhD students (direct costs only); DKK 1200 per ECTS for external participants