Checked (former) students with a grant for living away from home will get their money back

The Dutch government is refunding fines and reclaimed study financing to (former) students who received the grant for students living away from home. It is doing so because the selection process for the checks on this grant involved indirect discrimination. The evidence obtained during these checks, used to decide whether or not someone was actually living away from home, should never have been used. That makes the decisions unlawful, and they are therefore being reversed, Minister Bruins (Education, Culture and Science) writes to the House of Representatives. He is setting aside €61 million to put things right.

From Dienst Uitvoering Onderwijs (DUO) on November 11, 2024

Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation

We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back all the fines, and all the withheld study financing, to every student who was checked between 2012 and 2023.

Continue reading “Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation”

Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background

Banks are required to ‘know their customers’ and to look for money laundering and the financing of terrorism. Their vigilante efforts lead to racist outcomes.

Continue reading “Racist Technology in Action: Anti-money laundering efforts by Dutch banks disproportionately affect people with a non-Western migration background”

After the childcare benefits scandal, in which many single-parent families and families with a migration background, among others, were wrongly accused of fraud, it became painfully clear that not just people discriminate: algorithms do too

It was promised that these systems would become fairer, but the new annual report of the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) shows that little has improved since then. Algorithms still wrongly categorise people with certain characteristics as a risk. Noëlle Cecilia, co-founder of Brush AI (@ai.brush), joined Mandy as a guest on Sunday. She builds algorithms for companies and spent a year researching their fairness and discrimination. She explains why the mindset in developing AI systems needs to change.

By Noëlle Cecilia for Instagram on July 9, 2024

How and why algorithms discriminate

Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.

By Pie Sombetzki for AlgorithmWatch on June 26, 2024

Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts

In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black men: Face++ rated them as angrier, and Microsoft AI rated them as more contemptuous.

Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”
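At its core, Rhue’s test compares the average emotion score a product assigns to photos of each group. The sketch below illustrates that kind of comparison in Python; the scores and group sizes are invented placeholders, not her data and not real Face++ or Microsoft AI output.

```python
# A minimal sketch of the comparison behind such findings: average the anger
# scores an emotion-detection service assigns to photos of each group and
# look at the gap. All numbers are invented placeholders, not data from
# Rhue's study or from Face++/Microsoft AI.
from statistics import mean

# Hypothetical per-photo anger scores (0.0 to 1.0) returned by some API.
scores_black_players = [0.21, 0.35, 0.18, 0.40, 0.29]
scores_white_players = [0.10, 0.12, 0.08, 0.15, 0.11]

gap = mean(scores_black_players) - mean(scores_white_players)
print(f"mean anger-score gap: {gap:+.2f}")  # a positive gap: Black players rated angrier
```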

Racist Technology in Action: MyLife.com and discriminatory predation

MyLife.com is one of those immoral American companies that collects personal information to sell on as profiles, while at the same time suggesting to the people being profiled that incriminating information about them exists online, which they can have removed by buying a subscription (one that then does nothing and auto-renews in perpetuity).

Continue reading “Racist Technology in Action: MyLife.com and discriminatory predation”

Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department

Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.

Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”
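The “3.0 times higher chance” in the headline is a relative risk: the rate of unfounded home visits among students with a non-European migration background divided by the rate among other students. A minimal sketch of that arithmetic, with invented counts rather than Investico’s actual figures:

```python
# Relative risk: how much more often one group experiences an outcome than
# another. The counts below are hypothetical, not DUO's or Investico's data.

def relative_risk(events_a: int, total_a: int, events_b: int, total_b: int) -> float:
    """(events_a / total_a) divided by (events_b / total_b)."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical: 90 unfounded visits per 10,000 students in one group versus
# 30 per 10,000 in the other gives the reported factor of 3.0.
print(relative_risk(90, 10_000, 30, 10_000))  # -> 3.0
```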

Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite

For more than a year now, the Dutch Ministry of Foreign Affairs has ignored advice from its experts and continued its use of discriminatory risk profiling of visa applicants.

Continue reading “Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite”

Follow-up research confirms indirect discrimination in checks on the grant for students living away from home

DUO commissioned the independent foundation Algorithm Audit to carry out follow-up research into how DUO checked, between 2012 and 2023, whether or not a student was rightfully receiving study financing for students living away from home. The conclusions of the follow-up research confirm that students with a migration background were indirectly discriminated against in this process.

From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024

We’ll fix the mistakes later: how the municipality let a dubious algorithm loose on the people of Rotterdam

It was too good to be true: an algorithm to detect welfare fraud. Despite warnings, the municipality of Rotterdam kept believing in it for almost four years. A handful of civil servants, insufficiently aware of the ethical risks, were able to experiment undisturbed for years with the data of vulnerable people.

By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023

Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders

In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and The Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it is a form of institutionalized racism and calling it “speedy boarding for white people.”

Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”

Borders and Bytes

So-called “smart” borders are just more sophisticated sites of racialized surveillance and violence. We need abolitionist tools to counter them.

By Ruha Benjamin for Inquest on February 13, 2024

Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents

Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.

Continue reading “Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents”
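The Markup’s finding is, in essence, about value for money: divide the advertised speed by the monthly fee and compare that ratio across neighbourhoods. The sketch below illustrates the idea with invented offers; it is not The Markup’s dataset or methodology.

```python
# Compare median Mbps per dollar across neighbourhood groups. The offers
# below are invented examples, not The Markup's data.
from statistics import median

# (neighbourhood group, advertised Mbps, monthly fee in USD) - hypothetical
offers = [
    ("least White", 25, 55.0),
    ("least White", 50, 55.0),
    ("least White", 10, 55.0),
    ("most White", 200, 55.0),
    ("most White", 300, 55.0),
    ("most White", 100, 55.0),
]

def median_mbps_per_dollar(group: str) -> float:
    return median(mbps / fee for g, mbps, fee in offers if g == group)

for group in ("least White", "most White"):
    print(group, round(median_mbps_per_dollar(group), 2))
# Same $55 fee, but the "most White" group gets roughly 8x the speed per dollar.
```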

Belastingdienst continues to break the law with potentially discriminatory fraud algorithms

After the childcare benefits scandal, the Belastingdienst (the Dutch tax authority) was advised to stop three potentially discriminatory fraud algorithms immediately. It nevertheless decided to keep using them: the organisation’s interests weighed more heavily than complying with the law and protecting fundamental rights. This emerges from documents released to Follow the Money two years after their disclosure was requested. ‘Incomprehensible and astounding.’

By David Davidson and Sebastiaan Brommersma for Follow the Money on December 14, 2023

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

By John Albert for AlgorithmWatch on November 17, 2023

AI is far from being a miracle cure, certainly not in the hospital

Detecting tumours, developing new medicines: there is no shortage of promises about what artificial intelligence could mean for the medical world. But before you can entrust such important work to technology, you have to understand exactly how it works. And we are nowhere near that point yet.

By Maurits Martijn for De Correspondent on November 6, 2023
