Racial discrimination in dynamic pricing algorithms is neither surprising nor new. VentureBeat writes about another recent study that supports these findings, in the context of dynamic pricing algorithms used by ride-hailing companies such as Uber and Lyft. Poorer neighbourhoods with larger non-white populations were significantly associated with higher fare prices. A similar issue was discovered in Airbnb’s ‘Smart Pricing’ feature, which aims to help hosts secure more bookings: it turned out to be detrimental to black hosts, widening social inequality even if unintentionally.
When machine learning is applied to social data, the algorithms pick up on the statistical regularities of problematic social biases and historical injustices embedded in the data sets. Not only do these algorithms perpetuate bias; they can further disadvantage certain populations. Importantly, even where these effects are unintended, the consequences of racially based disparities must still be accounted for. Identifying the extent of the harm and fixing these problems remain a huge challenge, due to the often proprietary nature of these algorithms. It is clear that technology can unintentionally discriminate. However, leaving discrimination analysis to companies without a clear track record makes it even more pressing to demand better accountability mechanisms, including access to company data and algorithms.
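The mechanism is worth making concrete. The following is a minimal, entirely hypothetical simulation (not drawn from any of the studies above): historical fares carry a group-based markup, and a model trained without access to the group label still reproduces the disparity because a correlated "neighbourhood" feature acts as a proxy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: a binary group label that the model never sees.
group = rng.integers(0, 2, n)

# A "neighbourhood" feature correlated with group membership (the proxy).
neighborhood = 0.8 * group + rng.normal(0, 0.3, n)

# A legitimate pricing signal, e.g. demand.
demand = rng.normal(1.0, 0.2, n)

# Historical fares include a group-based markup: the bias embedded in the data.
fare = 10 + 2.0 * demand + 1.5 * group + rng.normal(0, 0.1, n)

# Ordinary least squares on (demand, neighborhood) only; group is excluded.
X = np.column_stack([np.ones(n), demand, neighborhood])
coef, *_ = np.linalg.lstsq(X, fare, rcond=None)
pred = X @ coef

# The model still predicts systematically higher fares for group 1,
# because the neighbourhood proxy carries the historical markup.
gap = pred[group == 1].mean() - pred[group == 0].mean()
print(f"predicted fare gap between groups: {gap:.2f}")
```

Dropping the sensitive attribute from the inputs does not remove the bias; the model recovers it through the correlated feature. This is one reason audits need access to the data and the model, not just the feature list.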