Workday Inc.’s Artificial Intelligence Allegedly Discriminates Against Black Applicants

Workday Inc.’s artificial intelligence screening tools allegedly exclude Black applicants. 

Bloomberg Law reported that Derek Mobley, who filed the lawsuit, seeks to represent applicants who were allegedly discriminated against. The suit claims Workday’s tools also discriminate against people over 40 and against people with disabilities.

Mobley, a Black man over 40, has been applying for jobs since 2018 and has been repeatedly denied, according to the lawsuit, which alleges that he applied to 80 to 100 positions at companies that use Workday as a screening tool.

“The Representative Plaintiff seeks …injunctive relief which reforms Workday’s screening products, policies, practices, and procedures so that the Representative Plaintiff and the class members will be able to compete fairly in the future for jobs and enjoy terms and conditions of employment traditionally afforded similarly situated employees outside of the protected categories,” the lawsuit read. 

According to the lawsuit, Workday’s screening shows “a pattern and practice of discrimination.” 

A spokesperson for Workday responded by saying that the company is “committed to trustworthy AI.”

“We engage in a risk-based review process throughout our product lifecycle to help mitigate any unintended consequences, as well as extensive legal reviews to help ensure compliance with regulations,” the spokesperson said.

Ruha Benjamin, a professor of African American Studies at Princeton University, researches the relationship between race, science, technology, and medicine.

During a discussion with ThoughtSpot, Benjamin spoke about bias in AI and data.

“These systems [rely] on historic data, historic forms of decision-making practices that then get fed into the algorithms to train them how to make decisions. And so if we acknowledge that part of that historic data and those patterns of decision-making have been discriminatory, it means that both the data and oftentimes the models, which are built to make important, sometimes life-and-death decisions for people, are being reproduced under the guise of objectivity.”

Benjamin continued, “The main danger of it, perhaps more dangerous than human bias, is that we assume that the technology is neutral and objective. So we don’t question it as much — not as much as we would [with] a racist judge or doctor or police. We think, ‘Oh, it’s coming to us through a computer screen. Okay. I give these resources to this patient and not this patient.’ And [we] don’t question where that decision came from.”

