Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation

We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back all the fines, and all the student financing it withheld, to every student who was checked between 2012 and 2023.

Continue reading “Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation”

Beyond Surveillance – The Case Against AI Detection and AI Proctoring

Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.

By Ian Linkletter for BCcampus on September 18, 2024

Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department

Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.

Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”
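
As an aside for readers unfamiliar with the statistic: the “3.0 times higher chance” is a risk ratio. A minimal Python sketch with invented counts (these are not DUO’s actual figures) shows how such a ratio is computed:

```python
# Invented counts, for illustration only; these are not DUO's actual figures.
unfounded_visits_migration = 300   # unfounded home visits, migration-background group
students_migration = 10_000        # students checked in that group

unfounded_visits_other = 100       # unfounded home visits, comparison group
students_other = 10_000            # students checked in the comparison group

rate_migration = unfounded_visits_migration / students_migration  # 0.03
rate_other = unfounded_visits_other / students_other              # 0.01

risk_ratio = rate_migration / rate_other
print(round(risk_ratio, 2))  # 3.0, i.e. "a 3.0 times higher chance"
```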

Follow-up study confirms indirect discrimination in checks on the grant for students living away from home

DUO had the independent foundation Algorithm Audit conduct a follow-up study into the way DUO checked, between 2012 and 2023, whether or not a student was rightfully receiving student financing at the rate for students living away from home. The conclusions of the follow-up study confirm that students with a migration background were indirectly discriminated against in these checks.

From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024

Dutch Higher Education continues to use inequitable proctoring software

In October last year, RTL Nieuws showed that Proctorio’s software, used to check whether students are cheating during online exams, works less well for students of colour. Five months later, RTL asked the twelve Dutch educational institutions on Proctorio’s client list whether they were still using the tool. Eight say they still do.

Continue reading “Dutch Higher Education continues to use inequitable proctoring software”

Late Night Talks: Students take their university to court over discriminatory AI software

Vrije Universiteit Amsterdam student Robin Pocornie and Naomi Appelman, co-founder of the non-profit organisation Racism and Technology Center, talk with each other about discrimination within artificial intelligence. What are the advantages and disadvantages of artificial intelligence, to what extent do we have a grip on it, and how can we counter discrimination amid the rapid developments in technology?

By Charisa Chotoe, Naomi Appelman and Robin Pocornie for YouTube on December 3, 2023

Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination

On October 17th, the Netherlands Institute for Human Rights ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race by using anti-cheating software. However, according to the institute, the VU did discriminate on the grounds of race in how it handled her complaint.

Continue reading “Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination”

Why we should believe Black women more than we believe tech companies

Imagine that companies build technology that is fundamentally racist: that technology is known to fail for Black people almost 30 percent more often than for white people. Then imagine that this technology is deployed in a crucial area of your life: your work, your education, your healthcare. And finally, imagine that you are a Black woman and that the technology works as expected: not for you. You file a complaint, only to hear from the national human rights institution that in this case it probably wasn’t racism.

By Nani Jansen Reventlow for Volkskrant on October 22, 2023

Ruling of the College voor de Rechten van de Mens shows how difficult it is to legally prove algorithmic discrimination

Today, the College voor de Rechten van de Mens (the Netherlands Institute for Human Rights) ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race through its use of anti-cheating software. The VU did, however, make a prohibited distinction on the grounds of race in its handling of her complaint.

Continue reading “Ruling of the College voor de Rechten van de Mens shows how difficult it is to legally prove algorithmic discrimination”

Black people more often go unrecognised by anti-cheating software Proctorio

Research by RTL Nieuws shows that the faces of people with dark skin are recognised far less well by the exam software Proctorio. The software, which is supposed to detect fraud, searches for the student’s face during online exams. That Black faces are recognised significantly worse leads to discrimination, say experts who reviewed RTL Nieuws’s research.

By Stan Hulsen for RTL Nieuws on October 7, 2023

Proctoring software uses fudge factor for dark-skinned students to adjust their suspicion score

Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.

Continue reading “Proctoring software uses fudge factor for dark-skinned students to adjust their suspicion score”
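
To make concrete what a “fudge factor” in a suspicion score looks like, here is a deliberately simplified, hypothetical Python sketch. The function, the event weights, and the 0.75 multiplier are all invented for illustration and are not taken from the Respondus patent; only the overall pattern, in which the same observed behaviour is scored differently depending on skin colour, reflects what the patent describes.

```python
def suspicion_score(event_weights: list[float], dark_skinned: bool) -> float:
    """Sum per-event risk weights, then apply a skin-tone-based adjustment.

    Hypothetical illustration: every value here is invented, but the
    structure, identical observed behaviour scored differently depending
    on skin colour, is the pattern the patent describes.
    """
    adjustment = 0.75 if dark_skinned else 1.0  # invented "fudge factor"
    return sum(event_weights) * adjustment

# Identical flagged events yield different suspicion scores:
print(suspicion_score([1.0, 2.0], dark_skinned=True))   # 2.25
print(suspicion_score([1.0, 2.0], dark_skinned=False))  # 3.0
```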

Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?

In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.

Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.

By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
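
The paper’s core fairness check can be sketched in a few lines of Python. The `detector` callable below stands in for any real GPT detector (its signature is assumed here for illustration); comparing false-positive rates on human-written text across the two groups is the measurement the authors report:

```python
from typing import Callable

def false_positive_rate(
    human_written: list[str],
    detector: Callable[[str], float],  # returns P(text is AI-generated)
    threshold: float = 0.5,
) -> float:
    """Share of genuinely human-written texts wrongly flagged as AI-generated."""
    flagged = sum(detector(text) >= threshold for text in human_written)
    return flagged / len(human_written)

# With real essay corpora, the paper's finding would show up as a gap like:
#   false_positive_rate(non_native_essays, detector)  # substantially above zero
#   false_positive_rate(native_essays, detector)      # close to zero
```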

Watching the watchers: bias and vulnerability in remote proctoring software

Educators are rapidly switching to remote proctoring and examination software for their testing needs, both due to the COVID-19 pandemic and the expanding virtualization of the education sector. State boards are increasingly utilizing this software for high-stakes legal and medical licensing exams. Three key concerns arise with the use of these complex software suites: exam integrity, exam procedural fairness, and exam-taker security and privacy. We conduct the first technical analysis of each of these concerns through a case study of four primary proctoring suites used in U.S. law school and state attorney licensing exams. We reverse engineer these proctoring suites and find that despite promises of high security, all their anti-cheating measures can be trivially bypassed and can pose significant user security risks. We evaluate current facial recognition classifiers alongside the classifier used by Examplify, the legal exam proctoring suite with the largest market share, to ascertain their accuracy and determine whether faces with certain skin tones are more readily flagged for cheating. Finally, we offer recommendations to improve the integrity and fairness of the remotely proctored exam experience.

By Avi Ginsberg, Ben Burgess, Edward W. Felten and Shaanan Cohney for arXiv.org on May 6, 2022

What problems are AI systems even solving? “Apparently, too few people ask that question”

In this interview, Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, discusses the sore lack of diversity in the white, male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.

Continue reading “What problems are AI systems even solving? “Apparently, too few people ask that question””
