Privacy: Despite the childcare benefits scandal, the government keeps using dubious algorithms, observes Dagmar Oudshoorn. Time for a supervisory authority.
By Dagmar Oudshoorn for NRC on October 14, 2020
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply […]
By Andrew D. Selbst and Solon Barocas for California Law Review on June 1, 2016
Just as the death of George Floyd sparked worldwide protests, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab asks whether a digital iconoclasm solves the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
Predictive and decision-making algorithms are playing an increasingly prominent role in our lives. They help determine what ads we see on social media, where police are deployed, who will be given a loan or a job, and whether someone will be released on bail or granted parole. Part of this is due to the recent rise of machine learning. But some algorithms are relatively simple and don’t involve any AI or ‘deep learning.’
By Brian Hedden for Kevin Dorst
Twitter appears more likely to highlight white people than Black people when cropping photos, user tests showed this week. Media outlets wrote about "racist algorithms", but can we really call them that? And how does discrimination arise in computer systems?
By Rutger Otto for NU.nl on September 25, 2020
What lessons about privacy can we draw today from the 1943 attack on the Amsterdam population registry? 'From a lack of freedom, you gain a clearer perspective on what freedom means.'
By Hans de Zwart for De Correspondent on May 8, 2014
I have volunteered to be a guest speaker in classes this Fall. It’s really the least I can do to help teachers and students through another tough term. I spoke tonight in Dorothy Kim’s class “Race Before Race: Premodern Critical Race Studies.” Here’s a bit of what I said…
By Audrey Watters for Hack Education on September 3, 2020
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020
The UK government has said that students in England and Wales will no longer receive exam results based on a controversial algorithm. The system developed by exam regulator Ofqual was accused of being biased.
By Jon Porter for The Verge on August 17, 2020
Analysis also shows pupils at private schools benefited most from algorithm.
By Niamh McIntyre and Richard Adams for The Guardian on August 13, 2020
Disadvantaged students among those more likely to have received lower grades than predicted.
By Cath Levett, Niamh McIntyre, Pamela Duncan and Rhi Storer for The Guardian on August 13, 2020
Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On.
By Amanda Askell, Annette Zimmermann, C. Thi Nguyen, Carlos Montemayor, David Chalmers, GPT-3, Henry Shevlin, Justin Khoo, Regina Rini and Shannon Vallor for Daily Nous on July 30, 2020
#IwanttoseeNyome outcry after social media platform repeatedly removes pictures of Nyome Nicholas-Williams.
By Nosheen Iqbal for The Guardian on August 9, 2020
Companies like Netflix, Facebook, and Uber deploy algorithms in search of greater efficiency. But when used to evaluate the powerful systems that judge us, algorithms can spur social progress in ways nothing else can.
By Noam Cohen for WIRED on October 25, 2018
The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.
By Brian Powers, Christine Vogeli, Sendhil Mullainathan and Ziad Obermeyer for Science on October 25, 2019
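The cost-as-proxy mechanism that Obermeyer et al. describe can be illustrated with a toy simulation. This is a minimal sketch, not the study's data or model: the group labels, the 0.6 "access" factor, and the top-30% selection cutoff are all hypothetical numbers chosen to show how ranking patients by predicted spending, when spending understates one group's need, selects fewer members of that group and only its sickest ones.

```python
import random

random.seed(0)

# Hypothetical population: each patient has a true "need" score.
# Observed spending ("cost") is lower for group B at equal need,
# standing in for the unequal access to care the paper describes.
patients = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    need = random.random()                 # true health need, 0..1
    access = 1.0 if group == "A" else 0.6  # assumed access gap (illustrative)
    cost = need * access                   # observed spending
    patients.append((group, need, cost))

# An algorithm trained on cost effectively ranks patients by spending.
# Select the top 30% by cost for "extra care" (cutoff is illustrative).
cutoff = sorted(p[2] for p in patients)[int(0.7 * len(patients))]
selected = [p for p in patients if p[2] >= cutoff]

share_b = sum(p[0] == "B" for p in patients) / len(patients)
share_b_selected = sum(p[0] == "B" for p in selected) / len(selected)

def avg_need(group):
    """Average true need among selected patients in a group."""
    vals = [need for g, need, _ in selected if g == group]
    return sum(vals) / len(vals)

print(f"Group B share of population:             {share_b:.2f}")
print(f"Group B share selected for extra care:   {share_b_selected:.2f}")
print(f"Avg true need among selected, group A:   {avg_need('A'):.2f}")
print(f"Avg true need among selected, group B:   {avg_need('B'):.2f}")
```

Because cost understates group B's need, group B is underrepresented among those selected, and the group B patients who do clear the cost cutoff are sicker on average than the selected group A patients, mirroring the paper's finding that Black patients assigned the same risk score were sicker than White patients.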
Artificial intelligence can amplify racism, sexism, and other forms of discrimination. We deserve more accountable and equitable AI.
From Algorithmic Justice League
Searching Google’s ad buying portal for “Black girls” returned hundreds of terms leading to “adult content”.
By Aaron Sankin and Leon Yin for The Markup on July 23, 2020