The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”, while keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to resolve the differences between religions and races by blocking all such terms, and by blocking even more social justice-related keywords such as “I can’t breathe” and “LGBTQ”. Blocking these terms for ad placement can reduce revenue for YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s public support for the Black Lives Matter movement.
A recent report by ADL, an anti-hate organisation in the US, has shown that social media platforms have consistently failed to prevent online hate and harassment. Despite the self-regulatory efforts made by social media companies, results from ADL’s annual survey show that the level of online hate and harassment has barely shifted in the past three years. These online experiences disproportionately harm marginalised groups, with LGBTQI+, Asian-American, Jewish and African-American respondents reporting higher rates of various forms of harassment. Many of these problems are intrinsic to the ways in which the business models of social media platforms are optimised for maximum engagement, further exacerbating existing issues in society.
Evidence suggests that there has been an uptick in prejudice against East Asian people during the COVID-19 pandemic.
From The Alan Turing Institute on May 8, 2020
The Oversight Board has upheld Facebook’s decision to remove specific content that violated the express prohibition on posting caricatures of Black people in the form of blackface, contained in its Hate Speech Community Standard.
From Oversight Board on April 13, 2021
Banning images of Zwarte Piet fits within Facebook’s policy of countering racist blackface stereotypes on its platforms. That is the judgment of an external board to which users and Facebook itself can submit the question of whether something was rightly removed or not.
By Pieter Sabel for Volkskrant on April 13, 2021
The algorithm systematically removes their content or limits how much it can earn from advertising, they allege.
By Reed Albergotti for Washington Post on June 18, 2020
For a Markup feature, Leon Yin and Aaron Sankin compiled a list of “social and racial justice terms” with help from Color of Change, Media Justice, Mijente and Muslim Advocates, then checked if YouTube would let them target those terms for ads.
By Cory Doctorow for Pluralistic on April 10, 2021
“Black power” and “Black Lives Matter” couldn’t be used to find videos for ads, but “White power” and “White lives matter” were just fine.
By Aaron Sankin and Leon Yin for The Markup on April 9, 2021
Asian-Americans experienced the largest single year-over-year rise in severe online hate and harassment of any group, with 17 percent having experienced sexual harassment, stalking, physical threats, swatting, doxing or sustained harassment this year compared to 11 percent last year, according to a survey released by ADL (the Anti-Defamation League). Fully half (50 percent) of Asian-American respondents who were harassed reported that the harassment was because of their race or ethnicity.
From ADL on March 24, 2021
The left must vie for control over the algorithms, data, and infrastructure that shape our lives.
By Meredith Whittaker and Nantina Vgontzas for The Nation on January 29, 2021
Since 2017, Mozilla – the makers of the Firefox browser – have written a yearly report on the health of the internet. This year’s report focuses on labor rights, transparency and racial justice. The piece about racial justice makes an interesting argument about how the sites we see on the first page of a search engine are a reflection of the general popularity of these sites or their ability to pay for a top result. This leads to a ‘mainstream’ bias.
Emails show that the LAPD repeatedly asked camera owners for footage during the demonstrations, raising First Amendment concerns.
By Sam Biddle for The Intercept on February 16, 2021
Enabling Apple’s “Limit Adult Websites” filter in the iOS Screen Time setting will block users from seeing any Google search results for “Asian” in any browser on their iPhone. That’s not great, folks.
By Victoria Song for Gizmodo on February 4, 2021
Facebook placed a number of leftwing organizers on a restricted list during Biden’s inauguration. It’s part of a much bigger problem.
By Akin Olla for The Guardian on January 29, 2021
Apply to participate in Data & Society’s academic workshop, The Hustle Economy: Race, Gender and Digital Entrepreneurship. This online collaborative program on May 20, 2021 will have space both for deep dives into academic works-in-progress and for multidisciplinary discussions of alternative practitioner projects that contribute to the understanding of hustle economies and their embodiments. Sareeta Amrute (Data & Society’s Director of Research and Associate Professor of Anthropology at the University of Washington), Tressie McMillan Cottom (Associate Professor at the University of North Carolina at Chapel Hill School of Information and Library Science) and Lana Swartz (Assistant Professor of Media Studies at the University of Virginia) invite applications from project leads to workshop their academic papers, podcasts, chapters, data mappings, and so on, and from collaborators to prepare interdisciplinary feedback on the selected works-in-progress. Together, we’ll help develop this emerging field centered on the lived experience, blunders, and promises of the digital economy.
From Data & Society on January 26, 2021
In light of the Black Lives Matter protests in the U.S. and protests against police brutality in Europe, technology companies have been quick to release corporate statements, commitments, campaigns and initiatives to tackle discrimination and racial injustice. Amber Hamilton evaluated 63 public facing documents from major technology companies such as Facebook, Instagram, Twitter, YouTube, Airbnb and TikTok.
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures.
Philosopher Dr. Natalie Ashton delves into the epistemic pitfalls of Facebook and the epistemic merits of Twitter.
By Natalie Ashton for Logically on November 26, 2020
The Oxford Internet Institute hosts Lisa Nakamura, founding Director of the Digital Studies Institute and Gwendolyn Calvert Baker Collegiate Professor in the Department of American Culture, University of Michigan, Ann Arbor, and a writer focusing on digital media, race, and gender. ‘We are living in an open-ended crisis with two faces: unexpected accelerated digital adoption and an impassioned and invigorated racial justice movement. These two vast and overlapping cultural transitions require new inquiry into the entangled and intensified dialogue between race and digital technology after COVID. My project analyzes digital racial practices on Facebook, Twitter, Zoom, and TikTok while we are in the midst of a technological and racialized cultural breaking point, both to speak from within the crisis and to leave a record for those who come after us. How to Understand Digital Racism After COVID-19 contains three parts: Methods, Objects, and Making, designed to provide humanists and critical social scientists from diverse disciplines or experience levels with pragmatic and easy to use tools and methods for accelerated critical analyses of the digital racial pandemic.’
From YouTube on November 12, 2020
Did a newsletter company create a more equitable media system—or replicate the flaws of the old one?
By Clio Chang for Columbia Journalism Review
The platform is overrun with hate speech and disinformation. Does it actually want to solve the problem?
By Andrew Marantz for The New Yorker on October 12, 2020
Lilian Stolk interviews internet policy consultant Joe McNamee on Facebook’s content moderation
By Lilian Stolk for The Hmm on November 16, 2020
An analysis of 63 recent statements shows that US tech companies repeatedly placed responsibility for racial injustice on Black people.
By Amber M. Hamilton for MIT Technology Review on September 5, 2020
The apps feed a false promise of stability to immigrants and people of color. Instead, drivers receive low pay and no benefits.
By Erica Smiley for The Guardian on October 29, 2020
Facebook amends code after deletion of black users’ photos sparks outrage.
By Nosheen Iqbal for The Guardian on October 25, 2020
Celeste Barber’s latest parody photo was flagged by the platform, but its algorithm’s prejudices aren’t a new problem.
By Lacey-Jade Christie for The Guardian on October 19, 2020
Typhoon, a Black rapper, was pulled over while driving a nice car. Since then, a justified debate about ethnic profiling has erupted. That debate rarely pays attention to the fact that the business models of Silicon Valley services are largely based on profiling, and that ethnic profiling is touted there as an innovative marketing instrument.
By Hans de Zwart for Bits of Freedom on June 23, 2016
Google and YouTube display information from Wikipedia. They do so uncritically, writes Hans de Zwart. When the information is wrong, or when Wikipedia is abused, Google does not intervene, or does so far too late.
By Hans de Zwart for NRC on August 8, 2018
Moderation: Facebook’s policy against Zwarte Piet is gaining real momentum. Pro-Piet pages are being hit hard, because opponents are reporting the posts on these pages en masse. Still, it remains to be seen whether Zwarte Piet will ever disappear from Facebook entirely.
By Reinier Kist and Wilfred Takken for NRC on August 31, 2020
We found discriminatory ads can still appear, despite Facebook’s efforts.
By Jeremy B. Merrill for The Markup on August 25, 2020
Over the past few years, we’ve routinely reviewed and refined our targeting options to make it easier for advertisers to find and use targeting that will deliver the most value for businesses and people. Today, we’re sharing an update on our ongoing review and streamlining the options we provide by removing options that are not widely used by advertisers.
From Facebook on August 11, 2020
Facebook took down an advertisement featuring the cover of the feminist monthly OPZIJ because it allegedly resembled a blackface image. The magazine’s cover features a portrait of Dr. Abbie Vandivere, the scientist who made world news with her discoveries during the restoration of Vermeer’s Girl with a Pearl Earring for the Mauritshuis. Vandivere is Black and has painted her lips red in the photo.
By Mark Koster for Villamedia on August 17, 2020
#IwanttoseeNyome outcry after social media platform repeatedly removes pictures of Nyome Nicholas-Williams.
By Nosheen Iqbal for The Guardian on August 9, 2020
Algorithmic Copyright Management: Background Audio, False Positives and De facto Censorship
By Adam Holland and Nick Simmons for Lumen on July 21, 2020
In ignoring Facebook’s size, it gave the company a free pass to continue operating mostly as is. But the real audit may come later this month in Congress.
By Casey Newton and Zoe Schiffer for The Verge on July 10, 2020
Facebook is forming new internal teams dedicated to studying its main social network and Instagram for racial bias, in particular for whether its algorithms trained using artificial intelligence adversely affect Black, Hispanic, and other underrepresented groups.
By Nick Statt for The Verge on July 21, 2020