HBCU Grad, Black Senior Engineer at IBM Says That AI Empowers and Deepens Systemic Racism

Calvin D. Lawrence, a graduate of Clark Atlanta University, is a Distinguished Engineer who has worked at IBM for the past 25 years. In his new book, Hidden in White Sight: How Artificial Intelligence Empowers and Deepens Systemic Racism, he reveals startling evidence that technology used by policing and judicial systems contains in-built biases stemming from human prejudices and systemic or institutional preferences.

Lawrence attests that there are steps AI developers and technologists can take to redress the balance. Yet a growing mountain of evidence suggests that the AI used by these organizations can entrench systemic racism, negatively affecting Black and ethnic minority groups when they apply for a mortgage or seek healthcare. Hidden In White Sight explores the remarkable breadth of AI use in the United States, Asia, and Europe, spanning healthcare services, policing, advertising, banking, education, and applying for and getting loans. The sobering reality is that AI outcomes can restrict those most in need of these services.

“Artificial Intelligence was meant to be the great social equalizer that helps promote fairness by removing human bias,” Lawrence writes. “But in fact, I have found in my research and in my own life that this is far from the case.” His research has found that bias in AI pervades a range of vital industries and touches all aspects of modern society, yet these biases are rarely confronted.

Lawrence has been designing and developing software for the last thirty years, working on many AI-based systems at the US Army, NASA, Sun Microsystems, and IBM. Drawing on that expertise and experience, he advises readers on what they can do to fight against biased AI and how developers and technologists can build fairer systems.

These recommendations include rigorous quality testing of AI systems, full transparency of datasets, viable opt-outs, and an in-built ‘right to be forgotten’. He also argues that people should be able to easily check what data is held against their names and be given clear access to recourse if that data is inaccurate.

“This is not a problem that just affects one group of people, this is a societal issue,” Lawrence writes. “It is about who we want to be as a society and whether we want to be in control of technology, or whether we want it to control us. I would urge anyone who has a seat at the table, whether you’re a CEO or tech developer or somebody who uses AI in your daily life, to be intentional with how you use this powerful tool.”

For more details and/or to purchase the book, visit HiddeninWhiteSight.com
