A GPT tool deployed by the US government further facilitates and normalises fascism

The US Army has adopted an artificial intelligence tool called ‘CamoGPT’ to systematically remove references to diversity, equity, inclusion, and accessibility (DEIA) from its training materials. This initiative aligns with an executive order from President Trump, signed on January 27th, titled Restoring America’s Fighting Force, which mandates the elimination of policies perceived as promoting “un-American, divisive, discriminatory, radical, extremist, and irrational theories” concerning race and gender.
Continue reading “A GPT tool deployed by the US government further facilitates and normalises fascism”

Racist Technology in Action: AI tenant screening fails the ‘fairness’ test
SafeRent Solutions, an AI-powered tenant screening company, settled a lawsuit alleging that its algorithm disproportionately discriminated against Black and Hispanic renters and those relying on housing vouchers.
Continue reading “Racist Technology in Action: AI tenant screening fails the ‘fairness’ test”

Easily developed facial recognition glasses outline how underprepared we are for privacy violations
Two engineering students at Harvard University, Caine Ardayfio and AnnPhu Nguyen, developed real-time facial recognition glasses. Testing them on passengers in the Boston subway, they easily identified a former journalist and some of his articles. A great way to strike up small talk or break the ice, you might think.
Continue reading “Easily developed facial recognition glasses outline how underprepared we are for privacy violations”

Falsely Flagged: The AI-Driven Discrimination Black Students Face
Common Sense, an education platform that advocates for an equitable and safe school environment, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the adoption and effects of the technology.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”

The datasets to train AI models need more checks for harmful and illegal materials
This Atlantic conversation between Matteo Wong and Abeba Birhane touches on some critical issues surrounding the use of large datasets to train AI models.
Continue reading “The datasets to train AI models need more checks for harmful and illegal materials”

Ethnic profiling is a problem in all of the Dutch government
On the International Day against Racism and Discrimination, Amnesty International Netherlands published their new research on the lack of protection by the Dutch government against racial profiling. Amnesty calls for immediate action to address the pervasive issue of ethnic profiling in law enforcement practices.
Continue reading “Ethnic profiling is a problem in all of the Dutch government”