In this issue of Logic, issue editor J. Khadijah Abdurahman and André Brock Jr., associate professor of Black Digital Studies at the Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, converse about the history of disinformation from Reconstruction to the present and discuss “the unholy trinity of whiteness, modernity, and capitalism”. Continue reading “Disinformation and anti-Blackness”
A conversation about the unholy trinity of whiteness, modernity, and capitalism.
By André Brock for Logic on December 25, 2021
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people. Continue reading “Regulating big tech to make sure nobody is excluded”
We have written about the racist cropping algorithm that Twitter uses and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms. Continue reading “Proof for Twitter’s bias toward lighter faces”
Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.
By Nicolas Kayser-Bril for AlgorithmWatch on August 17, 2021
Company pays $3,500 to Bogdan Kulynych who demonstrated flaw in image cropping software.
By Alex Hern for The Guardian on August 10, 2021
There are biases in one of Twitter’s algorithms, researchers discovered during an algorithmic bias bounty competition at Defcon. Among other things, photos of older people and people with disabilities are filtered out by Twitter’s crop tool.
By Stephan Vegelien for Tweakers on August 10, 2021
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?” Continue reading “Are we automating racism?”
Facebook, Twitter, Instagram, YouTube and TikTok failing to act on most reported anti-Jewish posts, says study.
By Maya Wolfe-Robinson for The Guardian on August 1, 2021
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests. Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favor white faces in the cropped previews of pictures. Continue reading “Racist technology in action: Cropping out the non-white”
Philosopher Dr. Natalie Ashton delves into the epistemic pitfalls of Facebook and the epistemic merits of Twitter.
By Natalie Ashton for Logically on November 26, 2020
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020