The journalist and academic says the bias encoded in artificial intelligence systems can’t be fixed with better data alone – the change has to be societal.
By Meredith Broussard and Zoë Corbyn for The Guardian on March 26, 2023
This study builds upon work in algorithmic bias and bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance, particularly in medical imaging. Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.
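To make "underdiagnosis bias" concrete: it is typically measured as a classifier's false-negative rate within each demographic group, i.e. how often truly sick patients are labelled "no finding". A minimal sketch of that measurement, with hypothetical data and group labels:

```python
import numpy as np

def underdiagnosis_rate(y_true, y_pred, groups):
    """Per-group false-negative rate: the share of truly positive
    (diseased) cases the model labels as 'no finding'."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)          # sick patients in group g
        rates[g] = float(np.mean(y_pred[mask] == 0))  # predicted healthy
    return rates

# Hypothetical toy data: 1 = disease present, 0 = no finding
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "A", "B"])

print(underdiagnosis_rate(y_true, y_pred, groups))
# A persistent gap between groups is what the underdiagnosis studies report.
```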
The side effects of sleep deprivation are wreaking havoc on daytime life.
By Audrey Simango and Ray Mwareya for Rest of World on February 7, 2023
The Apple Watch fails to accurately measure the blood oxygen levels in people of color, according to a class action lawsuit filed Dec. 24 in New York federal court.
By Anne Bucher for Top Class Actions on December 29, 2022
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.
By Ananth Reddy Bhimireddy, Ayis T Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet on May 11, 2022
The development and use of AI and machine learning in healthcare is proliferating. A 2020 study has shown that chest X-ray datasets that are used to train diagnostic models are biased against certain racial, gender and socioeconomic groups.
‘Oximeters’ are small medical devices used to measure levels of oxygen in someone’s blood. The oximeter can be clipped over someone’s finger and uses specific frequencies of light beamed through the skin to measure the saturation of oxygen in the blood.
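For background, pulse oximeters typically estimate saturation with a "ratio of ratios" of red and infrared light absorption, mapped to an SpO2 value through a calibration curve fitted on volunteer data; if that data skews towards light skin, readings for darker skin can drift. A rough sketch of the textbook calculation (the linear curve and the numbers below are illustrative, not from any real device):

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Classic 'ratio of ratios' pulse-oximetry estimate.

    The linear calibration (110 - 25*R) is a commonly cited textbook
    approximation; real devices use curves fitted empirically on
    volunteer data -- which is where skin-tone bias can enter.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

# Illustrative signal amplitudes (hypothetical numbers)
print(round(spo2_estimate(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0), 1))
# -> 97.5
```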
A ProPublica analysis found that traffic cameras in Chicago disproportionately ticket Black and Latino motorists. But city officials plan to stick with them — and other cities may adopt them too.
By Emily Hopkins and Melissa Sanchez for ProPublica on January 11, 2022
The Decolonising Digital Rights project is a collaborative design process to build a decolonising programme for the European digital rights field. Together, 30 participants are working to envision and build toward a decolonised field. This blog post charts the progress, learnings and challenges of the process so far.
By Laurence Meyer for Digital Freedom Fund on December 27, 2021
Health secretary signs up to hi-tech schemes countering health disparities and reflecting minority ethnic groups’ data.
By Andrew Gregory for The Guardian on October 20, 2021
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
She employs AI to get to the roots of health disparities across race, gender, and class.
By Neel V. Patel for MIT Technology Review on June 30, 2021
The CBS, the Dutch national statistics authority, issued a report in March showing that someone’s socioeconomic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.
All over the world, in the countries hardest hit by Covid-19, there is clear evidence that marginalised groups are suffering the worst impacts of the disease. This plays out differently in different countries: for instance in the US, there are substantial differences in mortality rates by race and ethnicity. Israelis have a substantially lower death rate from Covid-19 than Palestinians in the West Bank or Gaza. In Brazil, being of mixed ancestry is the second most important risk factor, after age, for dying of Covid-19. These racial and ethnic (and related) differences appear also to be present in the Netherlands, but have been effectively rendered politically invisible by the national public health authority’s refusal to report on it.
By Linnet Taylor and Tineke Broer for Global Data Justice on June 17, 2021
Successful and ethical artificial intelligence programs take into account behind-the-scenes ‘repair work’ and ‘ghost workers.’
By Sara Brown for MIT Sloan on May 4, 2021
Evidence suggests that there has been an uptick in prejudice against East Asian people during the COVID-19 pandemic.
From The Alan Turing Institute on May 8, 2020
In 2015, when T.J. Fitzpatrick attended a conference in Atlanta, he wasn’t able to use any of the soap dispensers in the bathroom.
According to data from The Markup’s Citizen Browser project, there are major disparities in who is shown public health information about the pandemic.
By Corin Faife and Dara Kerr for The Markup on March 4, 2021
By Klint Finley for GitHub on February 18, 2021
As many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report, Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
A Google service that automatically labels images produced starkly different results depending on the skin tone in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
The university hospital blamed a “very complex algorithm” for its unequal vaccine distribution plan. Here’s what went wrong.
By Eileen Guo and Karen Hao for MIT Technology Review on December 21, 2020
The Oxford Internet Institute hosts Lisa Nakamura, Gwendolyn Calvert Baker Collegiate Professor in the Department of American Culture at the University of Michigan, Ann Arbor, founding Director of its Digital Studies Institute, and a writer focusing on digital media, race, and gender.

‘We are living in an open-ended crisis with two faces: unexpected accelerated digital adoption and an impassioned and invigorated racial justice movement. These two vast and overlapping cultural transitions require new inquiry into the entangled and intensified dialogue between race and digital technology after COVID. My project analyzes digital racial practices on Facebook, Twitter, Zoom, and TikTok while we are in the midst of a technological and racialized cultural breaking point, both to speak from within the crisis and to leave a record for those who come after us. How to Understand Digital Racism After COVID-19 contains three parts: Methods, Objects, and Making, designed to provide humanists and critical social scientists from diverse disciplines or experience levels with pragmatic and easy-to-use tools and methods for accelerated critical analyses of the digital racial pandemic.’
From YouTube on November 12, 2020
Special rapporteur on racism and xenophobia believes there is a misconception that biosurveillance technology is without bias.
By Katy Fallon for The Guardian on November 11, 2020
The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.
By Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer for Science on October 25, 2019
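A minimal sketch of the proxy failure the study describes, with entirely hypothetical data and variable names: when less money is spent on Black patients at the same level of need, a model that ranks patients by predicted cost will place equally sick Black patients lower, so fewer are flagged for extra care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: latent illness severity, plus group membership.
need = rng.normal(size=n)
is_black = rng.random(n) < 0.3

# Assumption mirroring the study's finding: at the same level of need,
# less is spent on Black patients, so cost understates their need.
cost = need - 0.5 * is_black + rng.normal(scale=0.5, size=n)

# Cost as the proxy label: the top 3% by (predicted) cost are flagged
# for an extra-care programme.
threshold = np.quantile(cost, 0.97)
flagged = cost > threshold

for mask, name in [(~is_black, "white"), (is_black, "Black")]:
    sick = need[mask] > 1.5  # same 'sick' cutoff for both groups
    print(name, "flag rate among the sick:",
          round(float(flagged[mask][sick].mean()), 3))
# The gap disappears if the model is trained to predict need directly
# instead of cost -- the reformulation the authors propose.
```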