Racist Technology in Action: The algorithm that was supposed to match asylum seekers to places with jobs doesn’t work and is discriminatory

For many years and for many people, GeoMatch by the Immigration Policy Lab was a shining example of ‘AI for Good’: instead of using algorithms to hunt for criminals or fraud, why not use them to allocate asylum seekers to the regions that offer them the best job opportunities? Only the naive can be surprised that this didn’t work out as promised.

The COA, the Dutch Central Agency for the Reception of Asylum Seekers, has been experimenting with the algorithm since 2024. Follow the Money has now published research, based on its freedom of information requests, showing that the algorithmic system often leads to a lose/lose situation for the asylum seeker and the municipality, and that it carries “a disproportionate risk of discrimination based on ethnicity, gender, or marital status.”

Moreover, the system takes the state’s perspective rather than that of the asylum seeker. Instead of finding the best spot for each asylum seeker, it finds the best asylum seekers for each spot.
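
To make that flipped perspective concrete, here is a minimal sketch in Python. Everything in it is invented for the illustration (the scores, the place names, the greedy matching); it is not how GeoMatch actually works. The point is only that ranking people per place, rather than places per person, can leave individual asylum seekers with worse predicted prospects.

# Toy illustration only: invented scores and place names, not GeoMatch's real data or model.
# scores[person][place] = a made-up predicted chance of finding work.
scores = {
    "A": {"Utrecht": 0.6, "Emmen": 0.9},
    "B": {"Utrecht": 0.5, "Emmen": 0.4},
    "C": {"Utrecht": 0.2, "Emmen": 0.3},
}
capacity = {"Utrecht": 1, "Emmen": 2}


def best_place_per_person(scores, capacity):
    """Asylum seeker's perspective: give each person their best place that still has room."""
    remaining = dict(capacity)
    assignment = {}
    for person, options in scores.items():
        # Pick the highest-scoring place that still has a free slot.
        place = max((p for p in options if remaining[p] > 0), key=lambda p: options[p])
        assignment[person] = place
        remaining[place] -= 1
    return assignment


def best_person_per_place(scores, capacity):
    """State's perspective: fill each place's slots with the candidates who score highest there."""
    assignment = {}
    unassigned = set(scores)
    for place, slots in capacity.items():
        # Rank the remaining candidates by how well they score for *this* place.
        ranked = sorted(unassigned, key=lambda person: scores[person][place], reverse=True)
        for person in ranked[:slots]:
            assignment[person] = place
            unassigned.discard(person)
    return assignment


print(best_place_per_person(scores, capacity))
# {'A': 'Emmen', 'B': 'Utrecht', 'C': 'Emmen'} -> A gets their 0.9 option
print(best_person_per_place(scores, capacity))
# {'A': 'Utrecht', 'B': 'Emmen', 'C': 'Emmen'} -> Utrecht claims A at 0.6, its best candidate

In this toy case the person-first matching gives A their 0.9 option, while the place-first matching sends A to Utrecht at 0.6, simply because A happens to be Utrecht’s strongest candidate.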

As Follow the Money summarises:

An algorithm with an objective that at first glance seems innocent or positively formulated doesn’t necessarily turn out positively. Moreover, it’s exemplary of the sacred belief in big data: experiments are pushed through despite warnings, without convincing evidence that data will make a big difference.

Truth.

See: Big data belooft statushouder betere baankans, maar tegenovergestelde blijkt at Follow the Money.

Image cut from the original article.
