AI is far from a miracle cure, certainly not in the hospital

Detecting tumours, developing new medicines: there is no shortage of promises about what artificial intelligence could mean for the medical world. But before you can entrust such important work to technology, you have to understand exactly how it works. And we are nowhere near that point.

By Maurits Martijn for De Correspondent on November 6, 2023

Racist Technology in Action: The “underdiagnosis bias” in AI algorithms for health: Chest radiographs

This study builds on earlier work on algorithmic bias and on bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance, particularly in medical imaging. Yet the topic of AI-driven underdiagnosis has remained relatively unexplored.

AI recognition of patient race in medical imaging: a modelling study

Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person’s race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient’s racial identity from medical images.

By Ananth Reddy Bhimireddy, Ayis T Pyrros, Brandon J. Price, Chima Okechukwu, Haoran Zhang, Hari Trivedi, Imon Banerjee, John L Burns, Judy Wawira Gichoya, Laleh Seyyed-Kalantari, Lauren Oakden-Rayner, Leo Anthony Celi, Li-Ching Chen, Lyle J. Palmer, Marzyeh Ghassemi, Matthew P Lungren, Natalie Dullerud, Ramon Correa, Ryan Wang, Saptarshi Purkayastha, Shih-Cheng Huang, Po-Chih Kuo and Zachary Zaiman for The Lancet Digital Health on May 11, 2022

Decolonising Digital Rights: The Challenge of Centring Relations and Trust

The Decolonising Digital Rights project is a collaborative design process to build a decolonising programme for the European digital rights field. Together, 30 participants are working to envision and build toward a decolonised field. This blog post charts the progress, learnings and challenges of the process so far.

By Laurence Meyer for Digital Freedom Fund on December 27, 2021

Emma Pierson

She employs AI to get to the roots of health disparities across race, gender, and class.

By Neel V. Patel for MIT Technology Review on June 30, 2021

Covid-19 data: making racialised inequality in the Netherlands invisible

The CBS, the Dutch national statistics authority, issued a report in March showing that someone's socioeconomic status is a clear risk factor for dying of Covid-19. In an insightful piece, researchers Linnet Taylor and Tineke Broer criticise this report and show that the way in which the CBS collects and aggregates data on Covid-19 cases and deaths obfuscates the full extent of racialised or ethnic inequality in the impact of the pandemic.

Now you see it, now you don’t: how the Dutch Covid-19 data gap makes ethnic and racialised inequality invisible

All over the world, in the countries hardest hit by Covid-19, there is clear evidence that marginalised groups are suffering the worst impacts of the disease. This plays out differently in different countries: in the US, for instance, there are substantial differences in mortality rates by race and ethnicity. Israelis have a substantially lower death rate from Covid-19 than Palestinians in the West Bank or Gaza. In Brazil, being of mixed ancestry is the second most important risk factor, after age, for dying of Covid-19. These racial and ethnic (and related) differences also appear to be present in the Netherlands, but have effectively been rendered politically invisible by the national public health authority's refusal to report on them.

By Linnet Taylor and Tineke Broer for Global Data Justice on June 17, 2021

Understanding Digital Racism After COVID-19

The Oxford Internet Institute hosts Lisa Nakamura, Gwendolyn Calvert Baker Collegiate Professor in the Department of American Culture and founding Director of the Digital Studies Institute at the University of Michigan, Ann Arbor, and a writer focusing on digital media, race, and gender.

'We are living in an open-ended crisis with two faces: unexpected accelerated digital adoption and an impassioned and invigorated racial justice movement. These two vast and overlapping cultural transitions require new inquiry into the entangled and intensified dialogue between race and digital technology after COVID. My project analyzes digital racial practices on Facebook, Twitter, Zoom, and TikTok while we are in the midst of a technological and racialized cultural breaking point, both to speak from within the crisis and to leave a record for those who come after us. How to Understand Digital Racism After COVID-19 contains three parts: Methods, Objects, and Making, designed to provide humanists and critical social scientists from diverse disciplines or experience levels with pragmatic and easy to use tools and methods for accelerated critical analyses of the digital racial pandemic.'

From YouTube on November 12, 2020

Dissecting racial bias in an algorithm used to manage the health of populations

The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.
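
To make the cost-as-proxy mechanism concrete, here is a minimal simulation sketch of the failure mode the study describes. It is not the commercial algorithm from the paper: the group labels, the gamma distribution of need, the noise level, and the 0.7 spending factor are all hypothetical values chosen for illustration.

```python
# Toy simulation of the bias mechanism in Obermeyer et al. (2019):
# a risk score trained on health COSTS underrates the health NEEDS of
# a group on which less money is spent at the same level of need.
# All numbers here are illustrative assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need (e.g. number of active chronic conditions) is drawn
# from the same distribution for both groups by construction.
group = rng.integers(0, 2, n)       # 0 = White, 1 = Black (sketch labels)
need = rng.gamma(2.0, 1.5, n)       # same need distribution for everyone

# Assumed disparity: at equal need, less is spent on Black patients
# (unequal access to care). The 0.7 factor is made up for illustration.
spending = np.where(group == 1, 0.7, 1.0)
cost = need * spending + rng.normal(0.0, 0.3, n)

# The "risk score" stands in for a model that predicts cost accurately.
risk_score = cost

# Compare true need among patients near the 90th-percentile score, the
# kind of cut-off used to enrol patients in extra-care programmes.
cutoff = np.quantile(risk_score, 0.90)
near_cutoff = np.abs(risk_score - cutoff) < 0.1
for g, label in ((0, "White"), (1, "Black")):
    sel = near_cutoff & (group == g)
    print(f"{label}: mean true need at the cut-off = {need[sel].mean():.2f}")
```

In this toy setup, Black patients at the same score come out sicker, because the score measures cost rather than need; replacing the target with need itself (risk_score = need) makes the two group means match, mirroring the paper's point that reformulating the prediction target removes the bias.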

By Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer for Science on October 25, 2019
