In this issue of Logic, issue editor J. Khadijah Abdurahman converses with André Brock Jr., associate professor of Black Digital Studies at the Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, about the history of disinformation from Reconstruction to the present and “the unholy trinity of whiteness, modernity, and capitalism”.
Continue reading “Disinformation and anti-Blackness”
(dis)Info Studies: André Brock, Jr. on Why People Do What They Do on the Internet
A conversation about the unholy trinity of whiteness, modernity, and capitalism.
By André Brock for Logic on December 25, 2021
Regulating big tech to make sure nobody is excluded
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people.
Continue reading “Regulating big tech to make sure nobody is excluded”
Proof for Twitter’s bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”
Twitter’s algorithmic bias bug bounty could be the way forward, if regulators step in
Twitter opened its image cropping algorithm and gave prizes to people who could find biases in it. While interesting in itself, the program mostly reveals the impotence of regulators.
By Nicolas Kayser-Bril for AlgorithmWatch on August 17, 2021
Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces
Company pays $3,500 to Bogdan Kulynych who demonstrated flaw in image cropping software.
By Alex Hern for The Guardian on August 10, 2021
Research by Defcon attendees confirms biases in Twitter’s algorithm
Twitter’s algorithm contains biases, researchers discovered during an algorithmic bias bounty competition at Defcon. Among other things, photos of older people and of people with disabilities are filtered out by Twitter’s crop tool.
By Stephan Vegelien for Tweakers on August 10, 2021
Are we automating racism?
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
A ‘safe space for racists’: antisemitism report criticises social media giants
Facebook, Twitter, Instagram, YouTube and TikTok failing to act on most reported anti-Jewish posts, says study.
By Maya Wolfe-Robinson for The Guardian on August 1, 2021
Are We Automating Racism?
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Tech companies poured 3.8 billion USD into racial justice, but to what avail?
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests.
Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”
Twitter rolls out bigger images and cropping control on iOS and Android
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
Twitter will share how race and politics shape its algorithms
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
Racist technology in action: Cropping out the non-white
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favor white faces in the cropped previews of pictures.
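The mechanism behind such a bias is easy to picture: a saliency model scores every pixel, and the preview is cropped around the highest score, so whichever faces the model scores highest dominate the crop. A minimal sketch of that crop-selection step (the saliency scores below are a toy stand-in, not Twitter’s actual model):

```python
import numpy as np

def crop_around_max_saliency(image, saliency, crop_h, crop_w):
    # Find the most salient pixel and centre the crop window on it,
    # clamping so the window stays inside the image bounds.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 6x6 "image" whose saliency peaks at row 1, column 4,
# so the 4x4 preview is pulled toward the top-right of the picture.
img = np.arange(36).reshape(6, 6)
sal = np.zeros((6, 6))
sal[1, 4] = 1.0
crop = crop_around_max_saliency(img, sal, 4, 4)
```

If the saliency model systematically scores lighter faces higher, this crop step will just as systematically cut darker faces out of the preview, which is what the experiments described here demonstrated.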
Continue reading “Racist technology in action: Cropping out the non-white”
Quick test to see if Twitter’s cropping algorithm is still racist
Yup, still racist.
By Anthony Tordillos for Twitter on December 7, 2020
Why Twitter is (Epistemically) Better Than Facebook
Philosopher Dr. Natalie Ashton delves into the epistemic pitfalls of Facebook and the epistemic merits of Twitter.
By Natalie Ashton for Logically on November 26, 2020
Dataminr Targets Communities of Color for Police
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
Twitter apologises for ‘racist’ image-cropping algorithm
Users highlight examples of feature automatically focusing on white faces over black ones.
By Alex Hern for The Guardian on September 21, 2020