
The AIM Lab opened on 31 January 2020. This is an update by Assistant Professor Xiantong Zhen of the AIM Lab. The AIM Lab is a collaborative initiative of the Inception Institute of Artificial Intelligence in the United Arab Emirates and the University of Amsterdam.


The focus

The research lab focuses on medical image analysis using machine learning, covering active scientific topics of broad interest, spanning both methods and applications. These topics range from low-level vision and data pre-processing tasks to high-level image/video analysis tasks. From a technical perspective, researchers at the AIM Lab will work on fundamental and relatively general deep-learning models and algorithms, which will be applied to specific diseases, including but not limited to Alzheimer’s disease, cancer and cardiovascular diseases.
In particular, the AIM Lab will focus on the following six projects in its first five years:
• Learning with limited data and its applications to medical image analysis
• Multi-task learning for medical image analysis and data mining
• Continual learning with its applications to medical image classification
• Out-of-distribution generalization for medical image analysis
• Jointly learning from medical images and health records
• Automated report generation from radiology images

Electronic Health Records

As an example of a recent research development, disease classification relying solely on imaging data is attracting great interest in the field of medical image analysis. Current models could be further improved, however, by also employing Electronic Health Records (EHRs), which contain rich information on patients and findings from clinicians. Incorporating this information into disease classification is challenging because EHRs rely heavily on clinician input, limiting the possibility for automated diagnosis. AIM researchers proposed variational knowledge distillation, a new probabilistic inference framework for disease classification based on X-rays that leverages knowledge from EHRs. Specifically, they introduced a conditional latent variable model, in which the latent representation of the X-ray image is inferred with a variational posterior conditioned on the associated EHR text. By doing so, the model learns to extract the visual features relevant to the disease during training, and can therefore perform more accurate classification for unseen patients at inference time based solely on their X-ray scans.
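The core idea can be sketched numerically. In this minimal NumPy illustration (all dimensions, weight matrices, and feature vectors are hypothetical stand-ins, not the published architecture), an image-only prior p(z|x) is regularized toward a text-informed posterior q(z|x,t) via a KL term, so that at test time the latent representation z can be inferred from the X-ray features alone:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_params(features, W_mu, W_logvar):
    """Map features to the mean and log-variance of a diagonal Gaussian."""
    return features @ W_mu, features @ W_logvar

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( q || p ) between two diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

# Hypothetical feature dimensions for illustration only.
d_img, d_txt, d_z = 16, 8, 4
x = rng.normal(size=d_img)   # X-ray image features
t = rng.normal(size=d_txt)   # EHR text features (available during training only)

# Prior p(z | x): conditioned on the image alone.
W_mu_p = rng.normal(scale=0.1, size=(d_img, d_z))
W_lv_p = rng.normal(scale=0.1, size=(d_img, d_z))
mu_p, lv_p = gaussian_params(x, W_mu_p, W_lv_p)

# Posterior q(z | x, t): conditioned on the image AND the associated EHR text.
xt = np.concatenate([x, t])
W_mu_q = rng.normal(scale=0.1, size=(d_img + d_txt, d_z))
W_lv_q = rng.normal(scale=0.1, size=(d_img + d_txt, d_z))
mu_q, lv_q = gaussian_params(xt, W_mu_q, W_lv_q)

# Reparameterized sample of the latent representation z (used for classification).
z = mu_q + np.exp(0.5 * lv_q) * rng.normal(size=d_z)

# Minimizing this KL during training distills text-derived knowledge into
# the image-only prior, which is all that is available at inference time.
kl = kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p)
print(f"KL(q || p) = {kl:.4f}")
```

In a trained model the two Gaussian heads would be learned networks and the KL term would be part of a variational objective alongside a classification loss; here random weights simply demonstrate the shapes and the non-negativity of the KL regularizer.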


Opening of the AIM Lab on 31 January 2020 with Xiantong Zhen, Marcel Worring, Geert ten Dam, Victor Everhardt, Cees Snoek

Learning from medical data

As a second example, automating the generation of reports for medical images using deep learning promises to alleviate workload and assist diagnosis in clinical practice. However, learning from medical data is challenging due to the diversity and uncertainty inherent in reports written by radiologists with differing expertise and experience. To tackle this issue, AIM researchers presented a probabilistic latent variable model with a solid theoretical foundation in Bayesian inference. Formulated as variational topic inference, the model uses a set of topics as latent variables that guide sentence generation by aligning the visual and linguistic modalities in the latent space. In particular, the latent topics act as high-level patterns inferred from the training data that help address the common problem of generic and repetitive sentences in text generation. Experiments on chest X-ray image–report datasets, such as Indiana U. Chest X-Rays and MIMIC-CXR, show that the model can produce reports that are not mere copies of reports seen during training, while still achieving performance comparable to the state of the art on common natural language generation metrics.
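The topic-guided generation step can be illustrated with a toy NumPy sketch. All sizes, the weight matrix, and the uniform prior below are illustrative assumptions, not the published model: a distribution over K latent topics is inferred from image features, regularized by a KL term toward a prior, and each report sentence is then conditioned on a sampled topic:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(a):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical sizes for illustration: K latent topics, d image-feature dims.
K, d = 6, 16
x = rng.normal(size=d)                       # chest X-ray features

# Infer a distribution over topics from the image (the variational posterior).
W = rng.normal(scale=0.1, size=(d, K))       # stand-in inference weights
topic_probs = softmax(x @ W)

# A uniform prior over topics; the KL term regularizes the inferred posterior.
prior = np.full(K, 1.0 / K)
kl = np.sum(topic_probs * np.log(topic_probs / prior))

# Each sentence of the report is conditioned on its own sampled topic, so
# distinct topics steer the decoder toward distinct, non-repetitive sentences.
topics_per_sentence = [rng.choice(K, p=topic_probs) for _ in range(3)]
print("inferred topic distribution:", np.round(topic_probs, 3))
print("topics guiding the 3 sentences:", topics_per_sentence)
```

In the full model the inference network and the sentence decoder are learned jointly under a variational objective; the sketch only shows how per-sentence topic sampling injects the diversity the paragraph describes.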

In addition, the AIM Lab team has been developing fundamental machine learning algorithms to deal with catastrophic forgetting, cross-domain generalization and learning with limited data, all of which can be applied to medical problems.


  1. T. van Sonsbeek et al., “Variational Knowledge Distillation for Disease Classification in Chest X-Rays.” To appear in Information Processing in Medical Imaging (IPMI), 2021.
  2. Najdenkoska et al., “Variational Topic Inference for Chest X-Ray Report Generation.” To appear in International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021.
  3. Xiao et al., “A bit more bayesian: Domain-invariant learning with uncertainty,” in International Conference on Machine Learning (ICML), 2021.
  4. Derakhshani et al., “Kernel continual learning,” in International Conference on Machine Learning (ICML), 2021.
  5. Du et al., “Metanorm: Learning to normalize few-shot batches across domains,” in International Conference on Learning Representations (ICLR), 2021.
  6. Zhen et al., “Learning to learn variational semantic memory,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.