We’ve written twice before about the racist impact of DUO’s student fraud detection efforts. The Dutch government has now decided to pay back the fines and the withheld study financing to all students who were checked between 2012 and 2023.
Continue reading “Dutch government has to pay back 61 million euros to students who were discriminated against through DUO’s fraud profiling operation”
Beyond Surveillance: The Case Against AI Proctoring & AI Detection
On September 18, 2024, as part of the BCcampus EdTech Sandbox Series, I presented my case against AI proctoring and AI detection. In this post you will learn about key points from my presentation and our discussion.
By Ian Linkletter for BCcampus on October 16, 2024
Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates and advises on equitable and safe school environments, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”
Beyond Surveillance – The Case Against AI Detection and AI Proctoring
Are you an educator seeking a supportive space to critically examine AI surveillance tools? This workshop is for you. In an era where AI increasingly pervades education, AI detection and proctoring have sparked significant controversy. These tools, categorized as academic surveillance software, algorithmically monitor behaviour and movements. Students are increasingly forced to face them. Together, we will move beyond surveillance toward a culture of trust and transparency, shining a light on the black box of surveillance and discussing our findings. In this two-hour workshop, we will explore AI detection and proctoring through a 40-minute presentation, an hour of activities and discussion, and 20 minutes of group tool evaluation using a rubric.
By Ian Linkletter for BCcampus on September 18, 2024
Black Teens’ Schoolwork Twice As Likely To Be Falsely Flagged As AI-Generated
Black students are more than twice as likely as their peers to be falsely accused of using AI tools to complete school assignments.
By Sara Keenan for POCIT on September 19, 2024
Face Detection, Remote Testing Software & Learning At Home While Black — Amaya’s Flashlight
Remote learning software, like most software, can be biased. Here’s what happened when one student, Amaya, used a test proctoring app to take her lab quiz.
From YouTube on February 7, 2022
Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department
Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.
Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”
Dutch Institute of Human Rights tells the government: “Test educational tools for possible discriminatory effects”
The Dutch Institute for Human Rights has commissioned research exploring the possible risks for discrimination and exclusion relating to the use of algorithms in education in the Netherlands.
Continue reading “Dutch Institute of Human Rights tells the government: “Test educational tools for possible discriminatory effects””
Follow-up research confirms indirect discrimination in checks on the grant for students living away from home
DUO had the independent foundation Algorithm Audit conduct follow-up research into the way DUO checked, between 2012 and 2023, whether or not a student rightfully received student financing at the rate for students living away from home. The conclusions of the follow-up research confirm that students with a migration background were indirectly discriminated against in these checks.
From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024
AI detection has no place in education
The ubiquitous availability of AI has made plagiarism detection software utterly useless, argues our Hans de Zwart in the Volkskrant.
Continue reading “AI detection has no place in education”
We’re Not Living a “Predicted” Life: Student Perspectives on Wisconsin’s Dropout Algorithm
Wisconsin took down its dropout predictions after a Markup investigation. Here’s what two students we featured have to say.
By Maurice Newton and Mia Townsend for The Markup on December 21, 2023
Cabinet apologises for discrimination by DUO in its fraud detection
The checks for fraud with the basic student grant did indeed involve indirect discrimination, according to a report on the practices of the Dienst Uitvoering Onderwijs. The cabinet has offered its apologies.
From Vox on March 4, 2024
Racist Technology in Action: ChatGPT detectors are biased against non-native English writers
Students are using ChatGPT to write their essays. Anti-plagiarism tools are trying to detect whether a text was written by AI. It turns out that these detectors consistently misclassify the text of non-native speakers as AI-generated.
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”
Dutch Higher Education continues to use inequitable proctoring software
In October last year, RTL Nieuws showed that Proctorio’s software, used to check that students aren’t cheating during online exams, works less well for students of colour. Five months later, RTL asked the twelve Dutch educational institutions on Proctorio’s client list whether they were still using the tool. Eight say they still do.
Continue reading “Dutch Higher Education continues to use inequitable proctoring software”
Vrije Universiteit cleared of discrimination with facial recognition system
Commotion: according to the VU, many of the problems were caused not by the facial recognition but by a faltering connection.
By Sjoerd de Jong for NRC on January 10, 2024
Late Night Talks: Students take their university to court over discriminatory AI software
Vrije Universiteit Amsterdam student Robin Pocornie and Naomi Appelman, co-founder of the non-profit organisation Racism and Technology Center, talk with each other about discrimination within artificial intelligence. What are the advantages and disadvantages of artificial intelligence, to what extent do we have a grip on it, and how can we counter discrimination amid the rapid developments in technology?
By Charisa Chotoe, Naomi Appelman and Robin Pocornie for YouTube on December 3, 2023
Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination
On October 17th, the Netherlands Institute for Human Rights ruled that the VU did not discriminate against bioinformatics student Robin Pocornie on the basis of race by using anti-cheating software. However, according to the Institute, the VU did discriminate on the grounds of race in how it handled her complaint.
Continue reading “Judgement of the Dutch Institute for Human Rights shows how difficult it is to legally prove algorithmic discrimination”
Why we should believe Black women more than tech companies
Imagine that companies build technology that is fundamentally racist: it is known that this technology fails for Black people almost 30 percent more often than for white people. Then imagine that this technology is deployed in a crucial area of your life: your work, your education, your healthcare. Finally, imagine that you are a Black woman and that the technology works as expected: not for you. You file a complaint, only to hear from the national human rights institution that in this case it probably was not racism.
By Nani Jansen Reventlow for Volkskrant on October 22, 2023
Black people more often go unrecognised by anti-cheating software Proctorio
Faces of people with dark skin are recognised far less well by the exam software Proctorio, research by RTL Nieuws shows. The software, which is supposed to detect fraud, searches for the student’s face during online exams. That Black faces are recognised significantly worse leads to discrimination, say experts who reviewed RTL Nieuws’ research.
By Stan Hulsen for RTL Nieuws on October 7, 2023
Proctoring software uses fudge-factor for dark-skinned students to adjust their suspicion score
Respondus, a vendor of online proctoring software, has been granted a patent for their “systems and methods for assessing data collected by automated proctoring.” The patent shows that their example method for calculating a risk score is adjusted on the basis of people’s skin colour.
Continue reading “Proctoring software uses fudge-factor for dark-skinned students to adjust their suspicion score”
Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?
In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”
Algorithm to help find fraudulent students turns out to be racist
DUO is the Dutch organisation for administering student grants. It uses an algorithm to help them decide which students get a home visit to check for fraudulent behaviour. Turns out they basically only check students of colour, and they have no clue why.
Continue reading “Algorithm to help find fraudulent students turns out to be racist”
DUO’s fraud hunt almost exclusively hits students with a migration background
The hunt for suspected fraudsters by student finance provider DUO almost exclusively affects students with a migration background. DUO sees no wrong in its approach and wants to quadruple the number of checks in September.
By Anouk Kootstra, Bas Belleman and Belia Heilbron for De Groene Amsterdammer on June 21, 2023
Countering Discriminatory e-proctoring systems
In this session, we explored how the EU Charter right to non-discrimination can be (and has been) used to fight back against discriminatory e-proctoring systems.
By Naomi Appelman and Robin Pocornie for Digital Freedom Fund on May 31, 2023
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
Doing an exam as if “driving at night with a car approaching from the other direction with its headlights on full-beam”
Robin Pocornie’s complaint against the VU over its use of Proctorio, which had trouble detecting her face because she is a person of colour, is part of a larger, international story, as an article in Wired shows.
Continue reading “Doing an exam as if “driving at night with a car approaching from the other direction with its headlights on full-beam””
Watching the watchers: bias and vulnerability in remote proctoring software
Educators are rapidly switching to remote proctoring and examination software for their testing needs, both due to the COVID-19 pandemic and the expanding virtualization of the education sector. State boards are increasingly utilizing this software for high-stakes legal and medical licensing exams. Three key concerns arise with the use of these complex software systems: exam integrity, exam procedural fairness, and exam-taker security and privacy. We conduct the first technical analysis of each of these concerns through a case study of four primary proctoring suites used in U.S. law school and state attorney licensing exams. We reverse engineer these proctoring suites and find that despite promises of high security, all their anti-cheating measures can be trivially bypassed and can pose significant user security risks. We evaluate current facial recognition classifiers alongside the classifier used by Examplify, the legal exam proctoring suite with the largest market share, to ascertain their accuracy and determine whether faces with certain skin tones are more readily flagged for cheating. Finally, we offer recommendations to improve the integrity and fairness of the remotely proctored exam experience.
By Avi Ginsberg, Ben Burgess, Edward W. Felten and Shaanan Cohney for arXiv.org on May 6, 2022
Remote Learning Accidentally Introduced a New Danger for LGBTQ Students
It’s become increasingly difficult to know when your secrets are safe.
By Alejandra Caraballo for Slate Magazine on February 24, 2022
False Alarm: How Wisconsin Uses Race and Income to Label Students “High Risk”
The Markup found the state’s decade-old dropout prediction algorithms don’t work and may be negatively influencing how educators perceive students of color.
By Todd Feathers for The Markup on April 27, 2023
Company that makes millions spying on students will get to sue a whistleblower
Yesterday, the Court of Appeal for British Columbia handed down a jaw-droppingly stupid and terrible decision, rejecting the whistleblower Ian Linkletter’s claim that he was engaged in legitimate criticism when he linked to freely available materials from the ed-tech surveillance company Proctorio.
By Cory Doctorow for Pluralistic on April 20, 2023
What problems are AI-systems even solving? “Apparently, too few people ask that question”
In this interview with Felienne Hermans, Professor of Computer Science at the Vrije Universiteit Amsterdam, she discusses the sore lack of diversity in the white male-dominated world of programming, the importance of teaching people how to code, and the problematic uses of AI systems.
Continue reading “What problems are AI-systems even solving? “Apparently, too few people ask that question””
This Student Is Taking On ‘Biased’ Exam Software
Mandatory face-recognition tools have repeatedly failed to identify people with darker skin tones. One Dutch student is fighting to end their use.
By Morgan Meaker and Robin Pocornie for WIRED on April 5, 2023
ExamSoft’s proctoring software has a face-detection problem
A professor at Suffolk University Law School shares a bypass to an invasive feature of the ExamSoft testing software, and urges the company to change, in a new report.
By Monica Chin for The Verge on January 6, 2021
Computer science professor fears the rise of AI: ‘Would you like to pay 50 euros extra for a human? Press 1’
Programming is still a man’s world. Computer science professor Felienne Hermans wants to change that. Meanwhile, she lies awake at night over the wide range of misery that new AI applications such as ChatGPT are bringing about.
By Felienne Hermans and Laurens Verhagen for Volkskrant on March 16, 2023
Panel discussion on racism in AI: ‘Artificial intelligence holds up a mirror to us’
How do algorithms contribute to racism? And what are the consequences? These questions were addressed during a panel discussion on Wednesday afternoon at Science Park. ‘We need to create a “safe space” in which companies dare to be transparent without immediately being punished for it.’
By Sija van den Beukel for Folia on March 16, 2023
First Dutch citizen proves that an algorithm discriminated against her on the basis of her skin colour
Robin Pocornie was featured in the Dutch current affairs programme EenVandaag. Professor Sennay Ghebreab and former Member of Parliament Kees Verhoeven provided expertise and commentary.
Continue reading “First Dutch citizen proves that an algorithm discriminated against her on the basis of her skin colour”
South Africa’s poorest are staying up all night for cheaper internet rates
The side effects of sleep deprivation are wreaking havoc on daytime life.
By Audrey Simango and Ray Mwareya for Rest of World on February 7, 2023
Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament
In a roundtable on artificial intelligence in the Dutch Parliament, Quirine Eijkman spoke on behalf of the Netherlands Institute for Human Rights about Robin Pocornie’s case against the discriminatory use of Proctorio at the VU.
Continue reading “Dutch Institute for Human Rights speaks about Proctorio at Dutch Parliament”