Research Shows Link Between Racial Bias and AI Technology In Major Industries

AI technology has become a polarizing topic in recent years for a litany of reasons. Chief among them is how implicit bias may influence the data used to train it, according to research conducted by MIT scientist Joy Buolamwini.

Buolamwini describes the potentially harmful effects as the “coded gaze,” which can lead to discriminatory or exclusionary practices as industries like healthcare, law enforcement and big tech begin to lean on AI technology more regularly.

“We often assume machines are neutral, but they aren’t,” the scientist wrote in TIME, stating that her “research uncovered large gender and racial bias in AI systems sold by tech giants like IBM, Microsoft, and Amazon.”

Buolamwini found that the likelihood of errors when identifying dark-skinned women soared to over 35%, compared to an error rate of just 1% for lighter-skinned men, turning technology promoted as a way to rectify bias into another conduit for discrimination.
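To make the disparity concrete, here is a minimal sketch of how per-group error rates like these are computed. The predictions and labels below are invented for illustration and do not come from Buolamwini's study.

```python
# Hypothetical example: measuring a classifier's error rate separately
# for each demographic group. All data here is invented for illustration.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    errors = sum(p != t for p, t in zip(predictions, labels))
    return errors / len(labels)

# Invented gender predictions ("M"/"F") and ground truth for two groups.
groups = {
    "lighter-skinned men": (["M"] * 99 + ["F"], ["M"] * 100),
    "darker-skinned women": (["F"] * 64 + ["M"] * 36, ["F"] * 100),
}

for name, (preds, truth) in groups.items():
    print(f"{name}: {error_rate(preds, truth):.0%} error rate")
```

Reporting accuracy only in aggregate would hide exactly this kind of gap, which is why auditing breaks results out by group.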

Buolamwini’s findings are not the only ones to uncover a dangerous link between AI technology and racial and gender-based biases.

A study from Harvard showed that facial recognition software used by law enforcement collected its data from a mugshot directory in which Black people were disproportionately represented. Most people who end up in the database do so involuntarily, leaving them without protection over where their likenesses end up and to what ends.

“Even if accurate, face recognition empowers a law enforcement system with a long history of racist and anti-activist surveillance and can widen pre-existing inequalities,” said Alex Najibi, whose research guided the study.

Najibi believes changes can be made to create a more equitable landscape for facial recognition and other AI software, starting with a focus on adjusting the pool of collected data.

“First, algorithms can train on diverse and representative datasets, as standard training databases are predominantly White and male,” he said.
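A hedged sketch of what that first suggestion can look like in practice: oversampling underrepresented groups so each is equally represented before training. The records and group labels below are invented; real rebalancing work is considerably more involved.

```python
# Hypothetical sketch: rebalancing a skewed training set so every
# demographic group matches the size of the largest one. The data
# is invented for illustration only.
import random
from collections import Counter

def rebalance(records, key):
    """Oversample smaller groups until all groups match the largest."""
    by_group = {}
    for record in records:
        by_group.setdefault(key(record), []).append(record)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Duplicate randomly chosen members to reach the target size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# A skewed dataset: 80 faces from group "A", 20 from group "B".
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
balanced = rebalance(data, key=lambda r: r["group"])
print(Counter(r["group"] for r in balanced))
```

Oversampling is only one option; collecting genuinely diverse data, as Najibi suggests, addresses the root problem rather than duplicating what little minority-group data exists.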

The other solution? Lawmakers ensuring that policies are put in place to protect those most at risk of discriminatory practices worsened by AI.

“Legislation can monitor the use of face recognition technology, as even if face recognition algorithms are made perfectly accurate, their contributions to mass surveillance and selective deployment against racial minorities must be curtailed,” Najibi said.
