How Artificial Intelligence Exacerbates the Systematic Biases of the Real World
Artificial Intelligence draws on a multitude of tools, including algorithms and statistical analysis, to aid clinical decision-making and to pick up on things that humans may miss. Modern AI is still fairly new, but it is fast becoming one of the biggest markets in the world, expected to be worth £360 billion by 2024. Yet, despite the strong investment backing these machines, machine learning systems still present serious problems, including bias and discrimination on the grounds of gender, ethnicity and disability.

Applications of Artificial Intelligence. Image by Shil Karen via Flickr
The controversial firing and silencing of the prominent Black female computer scientist Dr Timnit Gebru by Google last December once again propelled discussions of the importance of diversity in combating ethical issues in computer science. Still, the tech industry remains dominated by able-bodied white males, according to a 2020 survey: just over 3% of professional developer jobs worldwide are held by Black people, and Women make up only 1 in 5 data science and AI professionals in the UK. This means that the data used to train machine learning systems are overwhelmingly Eurocentric, and thus lighter-skinned and male. Leading AI expert Joy Buolamwini attributes this narrow scope in AI research to the ‘under-sampled majority’: the exclusion from datasets of most of the global population, consisting of Black and brown people, along with Women of all races.
“We have seen skewed biometrics in surveillance systems and facial recognition technology that misidentifies Black and brown people, leading to false arrests”
At first, the problems were most visible online. Last September, users highlighted Twitter’s biased image-cropping algorithm, which, when posts were shared, failed to display people with darker skin alongside their lighter-skinned companions in the same picture. In addition, Buolamwini found that facial recognition software from major tech companies misidentifies prominent Black figures. Testing software from IBM and Google, among others, she found that the highest-ranking descriptors identified famous Women like Oprah as ‘male’ and described Michelle Obama’s hair as a ‘hairpiece’.
These issues have a substantial impact once machine learning systems move from the lab into real life. Recent assessments of these thinking machines reveal the consequences of their inexperience when interacting with the global majority. We have seen skewed biometrics in surveillance systems and facial recognition technology that misidentifies Black and brown people, leading to false arrests. Furthermore, the tangible effects of machine-made mistakes include less accurate diagnoses in healthcare, restricted access to resources and potentially critical accidents.
The healthcare sector is becoming more reliant on AI, with the UK government providing a £36 billion boost to the NHS. However, these systems are failing darker-skinned people: discriminatory algorithms are worse at detecting the severity of problems on darker skin. Machine learning programmes report higher proportions of false-negative results when detecting melanomas in darker skin, leaving patients whose cancers go undetected at increased risk. Through under-sampling and bias, these systems deal a further blow to healthcare equality by underrepresenting health conditions. This comes amidst the already disproportionate impact of COVID-19 on Black and brown people.
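To make the idea of a skewed error rate concrete, here is a minimal sketch of how a per-group false-negative rate can be computed. The labels and predictions below are hypothetical, illustrative stand-ins, not data from the melanoma studies mentioned above:

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of actual positives (e.g. melanomas) the model misses."""
    positives = y_true == 1
    missed = (y_pred == 0) & positives
    return missed.sum() / positives.sum()

# Hypothetical labels (1 = melanoma present) and model predictions,
# split by skin-tone group -- purely illustrative numbers.
groups = {
    "lighter skin": (np.array([1, 1, 1, 1, 0, 0]), np.array([1, 1, 1, 0, 0, 0])),
    "darker skin":  (np.array([1, 1, 1, 1, 0, 0]), np.array([1, 0, 0, 0, 0, 0])),
}

for name, (y_true, y_pred) in groups.items():
    print(f"{name}: false-negative rate = {false_negative_rate(y_true, y_pred):.0%}")
```

A model can look accurate overall while one group quietly bears most of the missed diagnoses, which is why error rates need to be reported per group rather than in aggregate.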

Facial recognition of former First Lady Michelle Obama. Credit: Joy Buolamwini, Algorithmic Justice League
“Because Black people spend less on healthcare in the US, the data revealed that Black patients had to be in a worse condition than white people to receive additional help”
Recently, a healthcare software programme responsible for making patient referrals was discovered to have initiated fewer referrals for Black people than for their white counterparts, who were equally sick. The software used healthcare costs as a proxy for the severity of patients’ health conditions. Because Black people spend less on healthcare in the US, the data revealed that Black patients had to be in a worse condition than white people to receive additional help. This exacerbates the disparity in the care given to Black and white patients with the same chronic illnesses. Machines are making literally life-changing decisions, in this case for the worse.
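The mechanism is easy to reproduce. Below is a minimal simulation, not the actual referral system, that assumes two groups with identical distributions of health need but a hypothetical spending gap, then flags the costliest 20% of patients for extra care as a stand-in for an algorithm trained to predict cost:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: both groups have the same distribution
# of true health need...
need = rng.normal(size=n)
group = rng.choice(["Black", "white"], size=n)

# ...but, mirroring US spending patterns, less is spent on Black
# patients at the same level of need (illustrative gap of 0.8).
cost = need - 0.8 * (group == "Black") + rng.normal(scale=0.3, size=n)

# Refer the costliest 20% of patients for additional help.
referred = cost > np.quantile(cost, 0.80)

for g in ("Black", "white"):
    mask = group == g
    print(f"{g}: referral rate = {referred[mask].mean():.1%}, "
          f"mean need among referred = {need[mask & referred].mean():.2f}")
```

Because cost stands in for need, the simulated tool refers Black patients less often, and those it does refer are sicker on average: the same pattern reported in the real system.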
We also face restricted access to services because of some of these AI tools. Recruitment platforms such as HireVue use AI to match people’s skills and experience to job roles for employers. HireVue’s software allows candidates to record answers to interview questions on video and assesses their performance based on body language and tone of voice, among other traits. However, it has been criticised for discriminating against disabled people, such as those with Down’s syndrome, who do not make eye contact in the same way. The company has more recently removed its facial recognition technology in response. This is quite the irony, considering these systems were adopted to dispel bias in human decision-making and expand diversity pools.
“Self-driving cars remain contested for the shocking fact that they are more likely to hit darker-skinned people and Women in the dark”
To take another aspect of modern life, current AI models present issues of safety. We need look no further than the automated vehicles of the future. Self-driving cars remain contested for the shocking fact that they are more likely to hit darker-skinned people and Women in the dark. A 2019 study found that pedestrians with lighter skin were accurately detected 74.6% of the time, compared with 62.1% for darker-skinned individuals. The built-in, AI-driven detection systems tested were largely created by young, white, able-bodied men. These systems identify pedestrians partly by leg movement, making it harder to detect people wearing skirts or those with mobility impairments. Poorer detection in low lighting also means that darker-skinned people face a higher chance of being hit. The Law Commission’s final report on the matter is due for release later this year.
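Figures like these come from comparing detection rates across skin-tone groups in the ground-truth data. The sketch below is an illustrative outline of that evaluation, grouping annotated pedestrians by skin-tone category (the 2019 study used the Fitzpatrick scale); the annotations here are hypothetical placeholders, not the study’s data:

```python
from collections import defaultdict

# Hypothetical ground-truth annotations: (skin-tone group, was the
# pedestrian detected by the model?) -- placeholder values only.
annotations = [
    ("lighter (Fitzpatrick I-III)", True),
    ("lighter (Fitzpatrick I-III)", True),
    ("lighter (Fitzpatrick I-III)", False),
    ("darker (Fitzpatrick IV-VI)", True),
    ("darker (Fitzpatrick IV-VI)", False),
    ("darker (Fitzpatrick IV-VI)", False),
]

# Tally detections per group: group -> [detected, total].
tally = defaultdict(lambda: [0, 0])
for group, detected in annotations:
    tally[group][0] += int(detected)
    tally[group][1] += 1

for group, (found, total) in tally.items():
    print(f"{group}: detection rate = {found / total:.1%}")
```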
We live in a society that is heavily reliant on, and widely dominated by, technology. Selective samples that consistently prioritise lighter-skinned, able-bodied men in research mean that systematic prejudices will likely continue to be built into computer programmes. This widens disparities in security, health, economic opportunity and safety, and further tightens the already narrow spaces in which marginalised groups exist. For the under-sampled majority, the consequences of narrowly scoped data sets could bring greater harm than good. Only when AI systems engage equitably with human reality can we better rely on their assistance. That requires incorporating more diversity into research: a broader range of skin tones, facial features and differently abled people.
Written By: Lauren Johnson – a freelance writer with a profound interest in culture and behaviour. Her work focuses on culture, lifestyle and the state of society. Connect with her on Instagram and her blog
Header Image: iStock/metamorworks/Canadian Medical Association Journal