29 June 2021
‘We human beings are limited: we can only focus on a few things at a time. AI, on the other hand, can examine and compare thousands of things at once. That’s why we need to apply this technology to analyse complex issues such as social justice and climate change,’ says Ghebreab.
To solve those big problems, you have to be prepared to think and act more broadly. ‘I have always been interdisciplinary to some extent: at the medical faculty, in social sciences and, of course, at the Faculty of Science. This has given me the vocabulary to talk to people from different disciplines, and it has given me a broader perspective as well.’
Ghebreab has spent the last decade imparting that emphasis on interdisciplinarity to his students. ‘Sometime around 2010, I decided to stop putting my energy into my own research and to invest it in the next generation instead. Questions I have dealt with in my teaching include: how does the brain process information? Where do we see pattern recognition reflected? And how does AI cope with pattern recognition and bias? How does this work in social networks?
At first, my students really struggled to integrate all these interdisciplinary techniques and perspectives. After a few years, though, they began to make the connection. It was great to see that happen.’
After several years of teaching pattern recognition and bias, Ghebreab wanted to inform the world outside the university about the dark side of AI technology, too. An opportunity to do so presented itself in 2015, when Google launched a photo app whose image-recognition feature labelled Black people as gorillas. Apparently, the algorithm had been trained on data that led it to behave in a racist way.
Ghebreab elaborates: ‘The app received a lot of public attention worldwide. How could algorithms possibly discriminate? Machines can’t be racist, can they? I seized that moment to launch the public debate here as well, via a spoken column entitled “What computers can teach us about discrimination”.’
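The puzzle in that public debate can be made concrete. The sketch below shows one standard way an algorithm ends up discriminating without anyone programming it to: it is trained on data in which one group is barely represented, so the model fits the majority group and makes systematically more errors on the minority group. The synthetic data, group sizes and numbers are invented purely for illustration; they do not describe Google’s system.

```python
# A minimal synthetic demonstration of how skewed training data can make a
# model 'discriminate' without anyone programming it to. All data, group
# sizes and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    # Two groups share the task, but their feature distributions (and hence
    # the correct decision boundary) differ slightly.
    X = rng.normal(loc=shift, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(20, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Evaluate on balanced test sets: the model, having mostly seen group A,
# is systematically less accurate for group B.
X_at, y_at = make_group(500, shift=0.0)
X_bt, y_bt = make_group(500, shift=1.5)
print('accuracy group A:', model.score(X_at, y_at))
print('accuracy group B:', model.score(X_bt, y_bt))
```

Running this prints a markedly lower accuracy for group B: no malicious intent anywhere, just an unrepresentative training set.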
For the past 10 years, Ghebreab has been largely occupied with pointing out the dangers of AI. Those days are over for him. Now, he is focusing more on the potential opportunities of AI. He founded the Civic AI Lab, where he is the scientific director, for that very purpose. The lab is a public-public cooperation between three Amsterdam parties – the municipality, VU Amsterdam and the UvA – and the Ministry of the Interior and Kingdom Relations.
Ghebreab says: ‘My goal is to use the lab to develop AI technology in order to expose the likelihood of inequality in the city on the one hand, while promoting equal opportunities on the other. Within the lab, we have jointly defined a number of research topics we want to explore on behalf of the city: education, mobility, health, well-being and environmental factors.’
Within the project on well-being, PhD students are working with the Municipal Health Service to study obesity. Many factors play a role in this wicked problem, including individuals’ home situations and the degree to which they are active. Despite a wealth of available data, it is not clear why obesity is more prevalent in some neighbourhoods than others.
Using AI, the researchers hope to answer questions such as: how does a disparity in obesity rates arise? What is the reason for this uneven distribution? What does that say about obesity and what is the best way to tackle it?
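As a purely hypothetical illustration of what such an analysis could look like, the sketch below relates invented neighbourhood-level factors to obesity prevalence and asks which factors carry the most explanatory weight. All data and feature names are made up; they merely stand in for the kind of Municipal Health Service data the PhD students work with.

```python
# A hypothetical sketch: relate neighbourhood-level factors to obesity
# prevalence and ask which factors carry explanatory weight. All data and
# feature names are invented; they stand in for the real health-service data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n_neighbourhoods = 200
feature_names = ['median_income', 'green_space', 'activity_rate', 'fastfood_density']

# Invented neighbourhood features (standardised) and a synthetic prevalence
# that, by construction, is driven mainly by activity and outlet density.
X = rng.normal(size=(n_neighbourhoods, len(feature_names)))
prevalence = (0.20 - 0.03 * X[:, 2] + 0.04 * X[:, 3]
              + rng.normal(scale=0.01, size=n_neighbourhoods))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, prevalence)
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f'{name:18s} importance {imp:.2f}')
```

In the real project, of course, the hard part is exactly what this toy version assumes away: assembling trustworthy data and separating correlation from cause.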
‘In the education project, we use AI to look at how government money is being distributed to students and schools,’ says Ghebreab. ‘The question is: why does inequality only seem to be increasing? What is causing this? Are the funds being distributed in such a way that the money goes where it needs to go?’
Ghebreab is also working on another interesting project, within the VU Institute for Societal Resilience. As a guest researcher, he has been working for years to improve the participation processes for newcomers in the Netherlands. As noted earlier, Ghebreab wants to help people. With the help of AI technology, he tackles migration-related issues such as the placement of refugees in their country of arrival.
‘Currently, the Immigration and Naturalisation Service (IND) and the Central Agency for the Reception of Asylum Seekers (COA) determine placement based on the person’s identity and where there is room – but there is a much better way to do this,’ he says. ‘Using an algorithm developed at Stanford, we can examine how to place each refugee in the location that will be most beneficial to them and the reception centre that hosts them.
With the help of AI, it’s possible to recognise patterns within previous placements of families and individuals. Who ended up where? And was the placement a success or not? These patterns are then used to place an individual where their chances of employment, or education, are greatest. It’s actually a very simple matchmaking system that is already proving effective in the United States and Switzerland.’
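As a rough illustration of this two-step idea – learn from past placements, then match – the sketch below uses hypothetical data and standard off-the-shelf tools: logistic regression for the ‘who succeeded where’ step, and a Hungarian-style assignment for the matching step. It is a simplified stand-in, not the actual Stanford algorithm (likely the data-driven assignment approach of Bansak et al., Science, 2018), which handles far richer data and constraints.

```python
# A simplified stand-in for the two-step placement idea: learn from past
# placements, then match. Hypothetical data throughout; the real Stanford
# algorithm handles far richer features and constraints.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_hist, n_features, n_locations = 500, 4, 3

# Step 1: hypothetical historical records – personal features, the location
# each person was placed in, and whether the placement 'succeeded'
# (e.g. employment within a year).
X_hist = rng.normal(size=(n_hist, n_features))
loc_hist = rng.integers(0, n_locations, size=n_hist)
success = (X_hist[:, 0] + rng.normal(size=n_hist) > 0).astype(int)

# One success model per location: which kinds of people thrive where?
models = [LogisticRegression().fit(X_hist[loc_hist == loc], success[loc_hist == loc])
          for loc in range(n_locations)]

# Step 2: for each newcomer, predict the success probability at every location.
newcomers = rng.normal(size=(6, n_features))
scores = np.column_stack([m.predict_proba(newcomers)[:, 1] for m in models])

# Step 3: assign newcomers to locations so that total expected success is
# maximal, respecting capacity (2 slots per location) by duplicating columns.
slot_loc = np.repeat(np.arange(n_locations), 2)
rows, cols = linear_sum_assignment(-scores[:, slot_loc])  # minimiser, so negate
for person, slot in zip(rows, cols):
    print(f'newcomer {person} -> location {slot_loc[slot]}')
```

Even in this toy version, the ‘matchmaking’ is exactly the simple idea Ghebreab describes: score every person-location pair, then solve an assignment problem so that no one is placed where their predicted chances are poor while a better-suited spot stays empty.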
This is an incredibly eventful time for the field of AI. In less than two years, the Innovation Centre for AI (ICAI) has grown into a national ecosystem of public-private and public-public AI labs. Last year, nine knowledge institutes in Amsterdam joined forces in the coalition known as AI Technology for People. And at the national level, the NL AI Coalition has established ELSA (ethical, legal, social aspects) labs.
While Ghebreab is pleased with this development, he also sees a downside. ‘Everyone, at all levels, is suddenly staking their own AI claim. But we should also appreciate the scientists who have been working on it for ages. Try to complement each other’s efforts instead of trying to annex one another’s territory. If we don’t work together, it will only delay the development of good AI.’
This is one of the critical comments Ghebreab offers regarding the current focus on Artificial Intelligence. ‘I see institutions, organisations, governments and companies – both inside and outside the university – starting to work with AI themselves. This gives rise to problems such as discriminatory algorithms. I am certain that, if they had relied more on co-creation, we would have had fewer problems – the Dutch childcare benefits scandal being a case in point.’
He is also critical of AI-related education in the Netherlands. ‘In other countries, there is much more emphasis on teaching children about AI, including in secondary schools. By starting this kind of education early, you can make sure that all demographic groups are exposed to it. It is important that people from all layers of society have a chance to contribute to AI, especially if we want it to be fair.’
Sennay Ghebreab is associate professor of Socially-Intelligent AI, programme director for the Information Studies Master’s programme, and scientific director of the Civic-AI Lab. He obtained his PhD at IvI’s Intelligent Sensory Information Systems group, and returned to the institute in 2020, after spending several years at the UvA’s Psychology Institute and as department head of social sciences at Amsterdam University College.
Annual review Faculty of Science 2020
This interview was also published in the annual review of the University of Amsterdam Faculty of Science. Read our annual review for news and background on teaching and research at the Faculty of Science in 2020, including interviews with lecturers, researchers and students, facts and figures on enrolment and staff, and news about organisational developments and our valorisation activities.