PhD Courses in Denmark

Advanced tools of Artificial Intelligence for explainable and reliable systems

The PhD School at the Faculty of Engineering at University of Southern Denmark

Prerequisites

Basic understanding of machine learning concepts and programming skills, preferably in Python. Prior experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch is recommended but not required.


Course Description

The recent tangible introduction of Artificial Intelligence (AI) models to everyday consumer-facing goods, services, and tools raises a number of new technological hurdles and ethical concerns. Among these is the fact that the majority of these models are powered by Deep Neural Networks (DNNs), which, although very expressive and helpful, are often complex and opaque. This special course delves into four key areas of modern AI research: Explainable AI (XAI), Uncertainty Quantification (UQ), Continual Learning, and Model Compression, highlighting how each contributes to model transparency, robustness, adaptability, and efficiency. Through a mix of theory, hands-on exercises, and real-world case studies, learners will gain the skills to design AI systems that balance accuracy, interpretability, reliability, and resource efficiency while addressing ethical and societal considerations.
 

Module 1: Towards Reliable AI and Systemic Implications

The module covers the deep learning workflow from data collection and preprocessing to model training and evaluation. It examines the conceptual balance between accuracy and interpretability, performance and computational cost, and reliability and trust, alongside ethical, legal, and societal considerations such as transparency and accountability.

Module 2: Explainable AI (XAI)

This module examines the role of explainability: counterfactual explanations, interpretable models versus black-box models, and tools and frameworks for explainable AI (LIME, SHAP, etc.). It covers leading XAI tools and libraries, highlighting their practical applications and research advancements, and evaluates model transparency and interpretability using established metrics and experimental studies.
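To make the idea of a counterfactual explanation concrete, here is a minimal NumPy sketch that searches for the smallest change to an input that flips a toy logistic model's prediction. The fixed weights, the gradient-ascent loop, and all function names are illustrative assumptions, not the tooling (LIME, SHAP) used in the course:

```python
import numpy as np

# Toy logistic model with fixed, hypothetical weights.
w = np.array([1.5, -2.0])
b = 0.25

def predict_proba(x):
    """Probability of the positive class under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.5, lr=0.1, steps=500):
    """Small gradient steps that push the input across the decision
    boundary (p = target), then stop: an approximately minimal change."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        if p >= target:
            break
        # Gradient of p with respect to the input: p * (1 - p) * w
        grad = p * (1.0 - p) * w
        x_cf = x_cf + lr * grad
    return x_cf

x = np.array([-1.0, 0.5])   # originally classified as negative (p < 0.5)
x_cf = counterfactual(x)    # a nearby point classified as positive
```

The difference `x_cf - x` is the explanation: "had these features been slightly different, the prediction would have flipped."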

Module 3: Continual Learning

This module covers the challenge of model retraining and data drift, techniques for lifelong and incremental learning, reducing data and compute waste through adaptive learning, and real-world applications of continual learning in dynamic environments.
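As a taste of the rehearsal-based techniques covered here, the following is a minimal NumPy sketch of a reservoir-sampling replay buffer, a common building block for mitigating catastrophic forgetting. The `ReplayBuffer` class and the toy task stream are illustrative assumptions, not part of the course materials:

```python
import numpy as np

rng = np.random.default_rng(0)

class ReplayBuffer:
    """Fixed-size buffer with reservoir sampling: every example seen
    so far is retained with equal probability."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a random slot with probability capacity / seen.
            j = rng.integers(0, self.seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        """Draw k stored examples to mix into the current training batch."""
        idx = rng.choice(len(self.data), size=min(k, len(self.data)),
                         replace=False)
        return [self.data[i] for i in idx]

buf = ReplayBuffer(capacity=50)
for task in range(3):              # a stream of three sequential "tasks"
    for i in range(100):
        buf.add((task, i))         # store (task id, example id)

# The buffer still holds examples from all tasks, not just the latest.
tasks_in_buffer = {t for t, _ in buf.data}
```

During incremental training, each new batch would be augmented with `buf.sample(k)` so the model keeps rehearsing earlier tasks.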

Module 4: Model compression for energy-efficient AI

This module covers measuring and reducing the carbon footprint of AI models, and model compression, pruning, and quantization for efficiency.
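The two compression techniques named above can be sketched in a few lines of NumPy. The toy weight matrix, the magnitude-pruning criterion, and the single-scale symmetric int8 scheme are simplifying assumptions; real deployments typically use framework tooling (e.g. per-channel quantization in PyTorch or TensorFlow):

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(64, 64)).astype(np.float32)   # toy weight matrix

def magnitude_prune(w, sparsity=0.8):
    """Zero out the smallest-magnitude weights so roughly a `sparsity`
    fraction of entries are removed."""
    k = int(sparsity * w.size)
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) > threshold, w, 0.0)

def quantize_int8(w):
    """Symmetric linear quantization to int8 with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

W_pruned = magnitude_prune(W, sparsity=0.8)
q, scale = quantize_int8(W_pruned)
W_deq = q.astype(np.float32) * scale   # dequantized approximation
```

Pruning shrinks storage and compute via sparsity; quantization shrinks each remaining weight from 32 bits to 8, with a reconstruction error bounded by half the scale.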

Module 5: Uncertainty Quantification in Image Classification

This module covers aleatoric and epistemic uncertainty, methods like Deep Ensembles and Monte Carlo Dropout, relevant metrics such as Expected Calibration Error (ECE), Brier score, and negative log-likelihood, and techniques to visualize uncertainty in image classification predictions.
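Of the metrics listed, Expected Calibration Error is easy to sketch in NumPy: it is the weighted average gap between confidence and accuracy over equal-width confidence bins. The binning scheme and the toy predictions below are illustrative assumptions:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average |accuracy - confidence| over confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()     # accuracy within the bin
            conf = confidences[mask].mean()  # mean confidence within the bin
            ece += mask.sum() / n * abs(acc - conf)
    return ece

# Perfectly calibrated toy model: confidence 0.75, accuracy 0.75.
conf = np.full(100, 0.75)
correct = np.array([1] * 75 + [0] * 25)
ece = expected_calibration_error(conf, correct)
```

An overconfident model (say, confidence 0.9 but only 50% accuracy) would instead yield an ECE of 0.4, which is the kind of gap this module's calibration techniques aim to close.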

Module 6: Case studies and project work

Work on case studies and hands-on experiments to explore applications in sensitive domains like medical imaging or finance. Complete individual or team projects to design trustworthy and efficient AI solutions, highlighting interpretability, reliability, and resource efficiency.


Dataset for the course:
Open-access medical imaging datasets will be used to run the modules. Additionally, real-world medical image data from our existing running projects will preferably be used for this course.


Delivery Mode
The course will be delivered through a combination of online and on-site teaching, hands-on lab sessions, and interactive discussions. Participants will have access to project resources, including tutorials, code repositories, and reading materials.


Learning outcomes

Knowledge

  • Understand the principles of explainable AI, continual learning, and energy-efficient AI design.
  • Understand the ethical, social, and environmental implications of AI systems and their sustainability impact.

Skills

  • Able to implement and optimize AI models with techniques such as interpretability methods, model compression, pruning, and quantization.
  • Able to design AI workflows that minimize computational and environmental costs while maintaining performance.

Competences

  • Can identify AI problems where sustainable practices can be applied and select appropriate methods to address them.
  • Able to critically evaluate AI systems for transparency, adaptability, and energy efficiency in both research and practical applications.


Teaching method (instruction)

Lecture (30%): short theory capsules.

Experimental/Lab session (20%): Development of models

Teamwork (50%): hands-on lab sessions and practical exercises.


Examination conditions

There are no formal examination conditions; however, students are encouraged to attend the theory sessions and complete the class activities.


Evaluation

Successful completion of the course requires active participation in lab sessions and a final presentation of the project report. Research publications are encouraged but not mandatory.

Assessment - 7-point grading scale

Second examiner – None


Course Type and Target Group

PhD-level course intended for students in computer science, data science, AI, mathematics, healthcare informatics, and related disciplines with an interest in AI.


Workload and Schedule

26 hours of interactive sessions

14 hours of hands-on lab sessions and practical exercises

85 hours dedicated to the project report writing and final presentation


Recommended reading

Textbooks

  1. Christoph Molnar, “Interpretable Machine Learning”. (2022): https://christophm.github.io/interpretable-ml-book/
  2. Chen & Liu, “Lifelong Machine Learning.” Morgan & Claypool, 2nd edition, August 2018 (1st edition, 2016).
  3. Smith, Ralph C. Uncertainty quantification: theory, implementation, and applications. Society for Industrial and Applied Mathematics, 2024.
  4. Machine Learning Models Compression Techniques: A Book to Learn and Delve Into the 3 Main Compression Techniques Used to Optimize Machine Learning Model Performance.

Online Courses and Tutorial

  1. On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices - AAAI 2022 Tutorial

Papers

  1. Arrieta, Alejandro Barredo, et al. "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI." Information Fusion 58 (2020): 82-115.
  2. Ke, Zixuan, and Bing Liu. "Continual learning of natural language processing tasks: A survey." arXiv preprint arXiv:2211.12701 (2022).
  3. Dantas, Pierre Vilar, Waldir Sabino da Silva Jr, Lucas Carvalho Cordeiro, and Celso Barbosa Carvalho. "A comprehensive review of model compression techniques in machine learning." Applied Intelligence 54, no. 22 (2024): 11804-11844.


Period* Time of year and/or dates:
The details are currently not available. For information on the exact time and place and how to sign up, email to Smith K. Khare, smkh@mmmi.sdu.dk, +45 65 50 37 31

Offered in:
The course will be offered in both online and on-site modes. On-site sessions will take place at The Maersk Mc-Kinney Moller Institute, SDU, Odense.


Price:
Free for Danish PhD students (direct costs only); DKK 1200 per ECTS for external participants