The MIT Technology Review shows how the models of major AI companies, like OpenAI’s ChatGPT, reflect India’s caste bias.
Nilesh Christopher writes:
While AI companies are working to mitigate race and gender biases to some extent, they are less focused on non-Western concepts such as caste, a centuries-old Indian system that separates people into four categories: Brahmins (priests), Kshatriya (warriors), Vaishyas (merchants), and Shudras (laborers). Outside of this hierarchy are the Dalits, who were treated as “outcastes” and stigmatized as polluting and impure.
His article gives some examples of what this looks like:

The Review had to resort to a specialised measurement instrument to assess caste bias, as the standard benchmarking tools for measuring social bias in AI models do not account for caste.
Even though discrimination based on caste has been illegal in India since its independence in 1947, even a fleeting knowledge of India should help in not being surprised by the fact that AI models reflect the caste-based discrimination that is deeply entrenched in Indian society. However, it is surprising that even a user base as large as India’s can’t compel AI companies to be sensitive to local issues.
See: OpenAI is huge in India. Its models are steeped in caste bias. at MIT Technology Review.
Images from the original article.
