Black Twitter is vital as a space for Black folk to create, maintain, and discuss the Black everyday in a way that reaffirms connection, and often joy.
From MSNBC News on July 17, 2023
What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and hides those biases. It is above all women (of colour) who are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights.
From European Digital Rights (EDRi) on July 12, 2023
Tech companies acknowledge machine-learning algorithms can perpetuate discrimination and need improvement.
By Zachary Small for The New York Times on July 4, 2023
Younger voices are using technology to respond to the needs of marginalized communities and nurture Black healing and liberation.
By Kenia Hale, Nate File and Payton Croskey for Boston Review on June 2, 2022
An automated cash transfer program in Jordan developed with significant financing from the World Bank is undermined by errors, discriminatory policies, and stereotypes about poverty.
By Amos Toh for Human Rights Watch on June 13, 2023
According to a new report by Human Rights Watch, an algorithmic welfare distribution system funded by the World Bank unfairly and inaccurately quantifies poverty.
By Tate Ryan-Mosley for MIT Technology Review on June 13, 2023
Afghan refugees’ asylum claims are being rejected because of bad AI translations of Pashto and Dari.
By Andrew Deck for Rest of World on April 19, 2023
The hunt for suspected fraudsters by student finance provider DUO almost exclusively hits students with a migration background. DUO sees no wrongdoing on its part and wants to quadruple the number of checks in September.
By Anouk Kootstra, Bas Belleman and Belia Heilbron for De Groene Amsterdammer on June 21, 2023
In summer 2021, sound artist, engineer, musician, and educator Johann Diedrick convened a panel at the intersection of racial bias, listening, and AI technology at Pioneerworks in Brooklyn, NY.
By Michelle Pfeifer for Sounding Out! on June 12, 2023
Text-to-image models amplify stereotypes about race and gender — here’s why that matters.
By Dina Bass and Leonardo Nicoletti for Bloomberg on June 1, 2023
In this session, we explored how the EU Charter right to non-discrimination can be (and has been) used to fight back against discriminatory e-proctoring systems.
By Naomi Appelman and Robin Pocornie for Digital Freedom Fund on May 31, 2023
Is English the leading language of the internet? For now, yes, with Russian and Spanish following behind.
By Russell Brandom for Rest of World on June 7, 2023
The proto-Taylorist methods of worker control Charles Babbage encoded into his calculating engines have origins in plantation management.
By Meredith Whittaker for Logic on June 2, 2023
Weaving memory into computer systems and Yoruba divination chains.
By Zainab Aliyu for Logic on June 2, 2023
More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P Torres point out, these ideologies have deeply racist foundations.
By Samara Linton for POCIT on May 24, 2023
On June 1, Democracy Now featured a roundtable discussion hosted by Amy Goodman and Nermeen Shaikh, with three experts on Artificial Intelligence (AI), about their views on AI in the world. They included Yoshua Bengio, a computer scientist at the Université de Montréal, long considered a “godfather of AI,” Tawana Petty, an organiser and Director of Policy at the Algorithmic Justice League (AJL), and Max Tegmark, a physicist at the Massachusetts Institute of Technology. Recently, the Future of Life Institute, of which Tegmark is president, issued an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Bengio is a signatory on the letter (as is Elon Musk). The AJL has been around since 2016, and has (along with other organisations) been calling for a public interrogation of racialised surveillance technology, the use of police robots, and other ways in which AI can be directly responsible for bodily harm and even death.
By Yasmin Nair for Yasmin Nair on June 3, 2023
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
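The study's core fairness check can be sketched in a few lines: compare how often a detector flags human-written text as AI-generated, split by writer group. The `detect_ai` function below is a toy stand-in for any GPT detector (real ones score measures like perplexity), and the sample texts are invented for illustration.

```python
# Toy sketch of a detector-fairness check; `detect_ai` and the samples
# are hypothetical, not the paper's actual detectors or data.

def detect_ai(text: str) -> bool:
    # Stand-in heuristic: short, simple word choice (common in constrained
    # English) gets flagged as "AI-like", mimicking the bias the paper finds.
    words = text.split()
    avg_len = sum(len(w) for w in words) / len(words)
    return avg_len < 4.5

def false_positive_rate(samples: list[str]) -> float:
    """Share of human-written samples misclassified as AI-generated."""
    return sum(detect_ai(s) for s in samples) / len(samples)

native = ["The committee deliberated extensively before announcing its decision."]
non_native = ["The committee talk long time and then say the result."]

print(false_positive_rate(native), false_positive_rate(non_native))
```

A gap between the two rates is exactly the disparity the authors report: human writing from non-native speakers is flagged far more often than human writing from native speakers.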
Anyone who sees technology purely as progress forgets a group of Dutch people for whom that does not hold. They are not only the elderly, says Aaron Mirck, but also the victims of smart algorithms used by the government. Technology is not neutral, he argues.
By Aaron Mirck for Het Parool on May 27, 2023
Chinese social media platforms like Xiaohongshu, Kuaishou, and Douyin host hundreds of users with American cop profile photos, set up to taunt Black users.
By Viola Zhou for Rest of World on May 23, 2023
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows discrimination all too well.
By John Harris for The Guardian on May 22, 2023
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlights the importance for computer researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
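The point of skin tone annotations like the MST scale is that, once each evaluation image carries a tone bucket, per-bucket error rates surface disparities that a single aggregate accuracy number hides. A minimal illustration (this is not Google's code, and the numbers are invented):

```python
# Illustrative per-skin-tone accuracy breakdown; data is made up.
from collections import defaultdict

# (mst_tone, prediction_correct) pairs for a hypothetical vision model.
results = [
    (1, True), (1, True), (2, True), (3, True), (4, True),
    (7, False), (8, False), (8, True), (9, False), (10, False),
]

by_tone = defaultdict(lambda: [0, 0])  # tone -> [correct, total]
for tone, ok in results:
    by_tone[tone][0] += ok
    by_tone[tone][1] += 1

overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.2f}")
for tone in sorted(by_tone):
    correct, total = by_tone[tone]
    print(f"MST {tone:>2}: {correct / total:.2f}")
```

Here the overall accuracy looks passable while the darker MST buckets fail almost entirely, which is precisely the pattern Gender Shades exposed.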
An artist considers a new form of machinic representation: the statistical rendering of large datasets, indexed to the probable rather than the real of photography; to the uncanny composite rather than the abstraction of the graph.
By Hito Steyerl for New Left Review on April 28, 2023
Lab will advance assessments of AI systems in the public interest.
From Data & Society on May 10, 2023
Whistleblower reveals Netherlands’ use of secret and potentially illegal algorithm to score visa applicants.
By Ariadne Papagapitos, Carola Houtekamer, Crofton Black, Daniel Howden, Gabriel Geiger, Klaas van Dijken, Merijn Rengers and Nalinee Maleeyakul for Lighthouse Reports on April 24, 2023
The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
By Angelina McMillan-Major, Emily M. Bender, Margaret Mitchell and Timnit Gebru for DAIR on March 31, 2023
Because of a bad facial recognition match and other hidden technology, Randal Reid spent nearly a week in jail, falsely accused of stealing purses in a state he said he had never even visited.
By Kashmir Hill and Ryan Mac for The New York Times on March 31, 2023
OpenAI’s contractor workforce helps power ChatGPT through simple interactions. They don’t get benefits, but some say the work is rewarding.
By David Ingram for NBC News on May 6, 2023
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Educators are rapidly switching to remote proctoring and examination software for their testing needs, both due to the COVID-19 pandemic and the expanding virtualization of the education sector. State boards are increasingly utilizing these software for high stakes legal and medical licensing exams. Three key concerns arise with the use of these complex software: exam integrity, exam procedural fairness, and exam-taker security and privacy. We conduct the first technical analysis of each of these concerns through a case study of four primary proctoring suites used in U.S. law school and state attorney licensing exams. We reverse engineer these proctoring suites and find that despite promises of high-security, all their anti-cheating measures can be trivially bypassed and can pose significant user security risks. We evaluate current facial recognition classifiers alongside the classifier used by Examplify, the legal exam proctoring suite with the largest market share, to ascertain their accuracy and determine whether faces with certain skin tones are more readily flagged for cheating. Finally, we offer recommendations to improve the integrity and fairness of the remotely proctored exam experience.
By Avi Ginsberg, Ben Burgess, Edward W. Felten and Shaanan Cohney for arXiv.org on May 6, 2022
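The disparity analysis the authors run on Examplify's face classifier can be illustrated with a rough adaptation of the "four-fifths" rule of thumb to an adverse outcome: compare the rate at which test-takers are flagged for cheating across skin tone groups. Group names and counts below are invented for illustration.

```python
# Hypothetical flag-rate disparity check; data and thresholds are
# illustrative, not the paper's actual measurements.

flag_counts = {           # skin-tone group -> (flagged, total sessions)
    "lighter": (3, 100),
    "darker": (12, 100),
}

rates = {g: flagged / total for g, (flagged, total) in flag_counts.items()}
best = min(rates.values())  # lowest (most favorable) flag rate
for group, rate in rates.items():
    # Ratio of the best rate to this group's rate; below 0.8 suggests
    # a disparate impact under the four-fifths heuristic.
    ratio = best / rate if rate else 1.0
    status = "ok" if ratio >= 0.8 else "disparate"
    print(f"{group}: flag rate {rate:.2f}, ratio {ratio:.2f} ({status})")
```

With these numbers, the darker-skin group is flagged four times as often, falling far outside the 0.8 band.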
Algorithms: time and again it is marginalised groups that are hit harder than others by digitalisation, write Evelyn Austin and Nadia Benaissa.
By Evelyn Austin and Nadia Benaissa for NRC on May 5, 2023
Gunpowder was a tremendously clever invention, with both good and bad applications. Will we one day look at artificial intelligence the same way?
By Claes de Vreese, Hind Dekker-Abdulaziz, Ilyaz Nasrullah, Martijn Bertisen, Nienke Schipper and Oumaima Hajri for Trouw on May 2, 2023
International Face Performance Conference (IFPC) 2022
By Yevgeniy B. Sirotin for NIST Pages on November 1, 2022
Visa policy: the Ministry of Foreign Affairs outsources the paperwork around visa applications to foreign companies as much as possible. But the risk of unequal treatment through profiling of applicants remains. Criticism of this from the internal privacy watchdog was brushed aside by the ministry.
By Carola Houtekamer, Merijn Rengers and Nalinee Maleeyakul for NRC on April 23, 2023
As billions flow into robotics, researchers who conducted the study are concerned about the effects this might have on society.
By Pranshu Verma for Washington Post on July 16, 2022
Tech pundits presume artificial intelligence is something you either conquer or succumb to. But they’re looking at it all wrong.
By Andrea Grimes for Dame Magazine on April 11, 2023
It’s become increasingly difficult to know when your secrets are safe.
By Alejandra Caraballo for Slate Magazine on February 24, 2022
In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.
By Joanna Redden for Parental social licence for data linkage for service intervention on October 5, 2022