Attempts to eliminate bias through diversifying datasets? A distraction from the root of the problem

In this eloquent and haunting piece, Hito Steyerl weaves the ongoing narratives of the eugenicist history of statistics into its integration with machine learning. She elaborates on why attempts to eliminate bias in facial recognition technology by diversifying datasets obscure the root of the problem: machine learning and automation are fundamentally reliant on extracting and exploiting human labour.


Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people

If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). The controversy first arose back in 2015, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).

