GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.

By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
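The paper's central claim is mechanistic: many detectors score text by how predictable it looks to a language model, so writing with a constrained vocabulary gets flagged. Below is a minimal sketch of that mechanism, assuming a perplexity-threshold detector built on GPT-2 as an open-source stand-in; the study evaluates commercial detectors, and the threshold and placeholder sample lists here are illustrative, not the authors' setup:

```python
# Minimal sketch: a perplexity-threshold "GPT detector" plus a per-group
# false-positive check. Model choice, threshold, and samples are
# illustrative assumptions, not the paper's actual detectors.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity under GPT-2; lower means more predictable text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return float(torch.exp(loss))

def flagged_as_ai(text: str, threshold: float = 40.0) -> bool:
    # Hypothetical decision rule: low perplexity -> classified as AI.
    return perplexity(text) < threshold

def false_positive_rate(human_texts: list[str]) -> float:
    # Share of genuinely human-written texts misclassified as AI.
    return sum(flagged_as_ai(t) for t in human_texts) / len(human_texts)

# Human-written samples from each writer group (placeholders).
native_samples = ["..."]
non_native_samples = ["..."]
print("FPR, native writers:    ", false_positive_rate(native_samples))
print("FPR, non-native writers:", false_positive_rate(non_native_samples))
```

The bias the authors report corresponds to the second rate coming out far higher than the first, and the simple prompting strategies they describe work on the same axis: rewording a text shifts its measured perplexity across whatever threshold a detector uses.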

Enough is Enough. Tell Congress to Ban Federal Use of Face Recognition

Cities and counties across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, Jackson, Mississippi to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go. It also disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police.

By Matthew Guariglia for Electronic Frontier Foundation (EFF) on April 4, 2023

Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied

Ignoring earlier Dutch failures in automated decision making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.


How the Netherlands uses A.I. for ethnic profiling

China using artificial intelligence to oppress the Uyghurs: sounds like something far removed from your daily life? The Netherlands, too, tracks (and persecutes) specific population groups with algorithms. Take Roermond, where cameras raise the alarm for cars with an Eastern European license plate.

By Florentijn van Rootselaar for OneWorld on January 14, 2021
