A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. Analysing national data from the United States for 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”
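To make these relative figures concrete: being “40% more likely to be denied” means facing 1.4 times the baseline denial rate. A minimal sketch of the arithmetic, using an entirely hypothetical 10% baseline denial rate (the article reports relative likelihoods, not absolute rates):

```python
# Hypothetical illustration of the reported relative disparities.
# The article gives relative likelihoods, not absolute denial rates,
# so the 10% baseline below is an assumption for illustration only.
baseline_denial_rate = 0.10  # assumed denial rate for White applicants

for relative_increase in (0.40, 0.80, 2.50):  # 40%, 80%, 250% more likely
    rate = baseline_denial_rate * (1 + relative_increase)
    print(f"{relative_increase:.0%} more likely -> denial rate of {rate:.1%}")
```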
In previous iterations of this type of research, lenders criticised the data for not including crucial metrics that would supposedly make the racial disparities disappear. However, many of these metrics (such as US credit scores or debt-to-income ratios) are kept secret and unavailable for scrutiny.
One explanation for the racial disparity in the data is the use of outdated models that rely on metrics that function as proxies for race. As Aracely Panameño, director of Latino affairs for the Center for Responsible Lending, explains in the article, the racism embedded in these automated lending algorithms is directly connected to the data they are trained on: “The quality of the data that you’re putting into the underwriting algorithm is crucial […] If the data that you’re putting in is based on historical discrimination, then you’re basically cementing the discrimination at the other end.” These classic problems are exacerbated by the specific US situation, where there is insufficient regulation and little mandated transparency for the algorithms used.
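To see how such proxy effects can arise, consider a minimal sketch. Everything here is hypothetical: synthetic data, a made-up zip-code proxy, and a plain logistic regression rather than any lender’s actual underwriting model. The point is only that a model trained on historically biased approvals can reproduce the disparity even when race itself is excluded from its inputs:

```python
# A minimal, hypothetical sketch of proxy discrimination.
# Not any lender's actual model; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership (e.g., race) is never given to the model, but the
# zip-code feature is ~90% correlated with it, standing in for
# historical residential segregation.
group = rng.integers(0, 2, n)                             # 0 = group A, 1 = group B
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical approvals were biased: group B applicants were often
# denied regardless of income.
income = rng.normal(50, 10, n)
biased_denial = (group == 1) & (rng.random(n) < 0.5)
approved_historically = (income > 45) & ~biased_denial

# Train only on "neutral" features: income and zip code (no race column).
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved_historically)

# The model denies group B far more often, having learned the proxy.
pred = model.predict(X)
for g in (0, 1):
    denial_rate = 1 - pred[group == g].mean()
    print(f"group {g}: predicted denial rate {denial_rate:.1%}")
```

Because the zip-code feature stands in for group membership, simply dropping the race column does nothing to remove the learned discrimination; this is the “cementing” effect Panameño describes.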