23 June 2022
An AI algorithm that was supposed to predict the severity of patients' conditions was found to systematically conclude that white patients were worse off than black patients. Another AI algorithm, used to select job applicants, was found to choose men far more often than women. These are just two recent examples of the problems that can arise when AI algorithms, mostly based on machine learning, are applied in practice.
Such problems inspired the creation of the FACT-AI course, short for ‘Fairness, Accountability, Confidentiality and Transparency in AI’. It is a compulsory, full-time, four-week course in the Master’s programme ‘Artificial Intelligence’. The course ran for the third time in January 2022, coordinated for the first time by Fernando Pascoal Dos Santos, assistant professor in the Socially Intelligent Artificial Systems (SIAS) group at the Informatics Institute of the University of Amsterdam.
‘We hope that the course makes students aware of the societal problems that can arise in the application of AI algorithms, and teaches them how to apply state-of-the-art algorithmic approaches to attenuate those problems’, he says. ‘And ultimately, that the course contributes to making AI algorithms more inclusive for everybody.’
A central component of the course, and the assignment that determines the students’ grades, is an exercise in replicating papers accepted for publication at top machine learning conferences. ‘The students’ replication papers are then submitted to the Machine Learning Reproducibility Challenge’, Pascoal Dos Santos says. ‘This year, 21 of the 43 accepted papers were from UvA students, including two Outstanding Paper Awards and the Best Paper Award’.
Together with his fellow students Piyush Bagad, Jesse Maas and Danilo de Goede, Paul Hilders won this year’s Best Paper Award. Hilders says about the FACT-AI course: ‘When you start studying AI, concepts like fairness, accountability and transparency may not be the most exciting topics to think about. But the FACT-AI course fits very well into the curriculum and awakened an interest in me that I had not expected. At the start of the course, students have no experience with doing research. The course throws you right in at the deep end. You are immediately confronted with the difficulties of reproducing other people’s research. I found that very interesting.’
What, according to Hilders, makes the FACT-AI course so successful in the ML Reproducibility Challenge? Hilders: ‘I think it is the combination of good supervision and the high quality of the Master’s students, who come from all over the world. Each group of students has its own supervisor with a great deal of experience, who is always available for questions and who critically follows the writing of the replication paper.’
Hilders: ‘In addition, as a student you notice how smart your fellow students are and how everyone motivates each other. FACT-AI lets you discover new research directions, and I think it will influence the directions students choose later. In my case, I am very interested in medical AI, a field in which concepts like fairness and transparency are very important.’
UvA/IvI course ‘Fairness, Accountability, Confidentiality and Transparency in AI’
Machine Learning Reproducibility Challenge
Details about the connection between the course and the Machine Learning Reproducibility Challenge, by Ana Lučić
Best Paper Award
Paper: ‘Reproducibility Study of Counterfactual Generative Networks’
Students: Piyush Bagad, Jesse Maas, Paul Hilders, Danilo de Goede
Supervisor: Christos Athanasiadis
Outstanding Paper Awards
Paper: ‘Strategic classification made practical: reproduction’
Students: Guilly Kolkman, Maks Kulicki, Jan Athmer, Alex Labro
Supervisor: Ilse van der Linden
Paper: ‘On the reproducibility of Exacerbating Algorithmic Bias through Fairness Attacks’
Students: Andrea Lombardo, Matteo Tafuro, Tin Hadži Veljković, Lasse Becker-Czarnetzki
Supervisor: Sara Altamirano