Racist Technology in Action: ChatGPT detectors are biased against non-native English writers

Students are using ChatGPT to write their essays, and anti-plagiarism tools are trying to detect whether a text was written by an AI. It turns out that this type of detector consistently misclassifies the writing of non-native English speakers as AI-generated.

Turnitin is one of the companies claiming that its product can detect AI-generated text. It offers this capability even though OpenAI, the creator of ChatGPT, says it cannot get such detectors to work reliably:

Our research into detectors didn’t show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.

Turnitin apparently has no qualms about giving educators a tool that will produce many false accusations.

It gets worse, however. Researchers at Stanford have found that these false accusations aren't spread evenly across the student population. They evaluated the performance of several widely used ChatGPT detectors:

Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions.

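To make that last finding concrete: the study shows that a single rewriting request to ChatGPT is enough to swing what these detectors report. The sketch below illustrates the idea; it assumes the official `openai` Python client, the prompt wording is only along the lines of what the paper reports, and `detector_score` is a hypothetical stand-in for whichever detector is being tested.

```python
# A minimal sketch of the kind of paraphrasing strategy the study describes.
# Assumptions: the official `openai` Python client is installed, an API key
# is set in the environment, and `detector_score` is a hypothetical function
# wrapping whatever GPT detector is under evaluation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def elevate(text: str) -> str:
    """Ask ChatGPT to rewrite a text in more elaborate language."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Elevate the provided text by employing "
                           f"literary language:\n\n{text}",
            },
        ],
    )
    return response.choices[0].message.content

essay = "..."  # e.g. a non-native speaker's essay that was flagged
rewritten = elevate(essay)
# Compare how the detector scores the original and the rewritten text:
# print(detector_score(essay), detector_score(rewritten))
```

If one rewriting prompt is all it takes to flip a detector's verdict, the detectors are measuring vocabulary and phrasing, not authorship, which is exactly why they penalize writers with constrained linguistic expression.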
The authors of the study rightly conclude that there should be a “broader conversation about the ethical implications of deploying ChatGPT content detectors” (how about: don’t use them) and they “caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.”

See: GPT detectors are biased against non-native English writers at arXiv.
