
Photos Of Black Trump Supporters Are Fake News AF

Experts caution against the proliferation of digitally manipulated images and deepfake technology aimed at influencing Black voters, emphasizing the need for scrutiny and regulation.

In the latest example of Trump supporters engaging in a disinformation campaign, digitally manipulated images have proliferated across social media in a concerted attempt to get Black voters to vote for the likely Republican Party nominee, Donald Trump. 

As the BBC reports, although nothing ties the images directly to former President Donald Trump’s campaign, Cliff Albright, co-founder of Black Voters Matter, a group that encourages Black civic participation, says they represent yet another attempt to sway voters using technology.

“There have been documented attempts to target disinformation to Black communities again, especially younger Black voters,” Albright told the BBC. He also cautioned that the images were part of a “very strategic narrative” on the part of conservatives to influence the election.

In 2016, when Trump won the presidential election, the chief worry was foreign meddling by Russia via cyberattacks; this time around, experts caution that both foreign and domestic attacks could affect the election’s outcome.

This concern is mirrored by an anecdote in the BBC’s reporting: the outlet showed Douglas, a Black taxi driver from Atlanta, an AI-generated image of Trump surrounded by Black voters. Douglas was unable to discern that the image was fake and believed that Trump had large groups of Black supporters.

Once the ruse was revealed, Douglas remarked, “Well, that’s the thing about social media. It’s so easy to fool people.”

According to The Guardian, AI has already reared its head in this election cycle via a “deepfake” robocall that used the voice of President Joe Biden to tell voters to stay at home. Lisa Gilbert, the executive vice president of Public Citizen, an advocacy group pushing for federal and state regulation of AI in American politics, told the outlet that it doesn’t matter how many people were deceived; what ultimately matters is that the capability exists.

“I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said. “It could come from your family member or your neighbor and it would sound exactly like them.”

Gilbert continued, “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

As NBC News reports, in the absence of federal policy on AI, some states are stepping in to fill the gap with proposed laws that would ban or restrict the use of AI in political advertisements. Proposing such legislation, however, does not mean the bills will become law. If AI technology continues to outpace regulation, the 2024 presidential election could be heavily contaminated by mis- and disinformation campaigns.

As Thomson Reuters points out, the lack of regulation in this key area leaves the political arena wide open to exploitation, as the “deepfake” robocall demonstrated.

“Indeed, unregulated AI poses various risks, such as the spread of deep fake misinformation campaigns, extensive personal data collection, and employment disruption,” Allyson Brunette, Thomson Reuters’ workplace consultant, wrote. “These potential harms fall under federal jurisdiction and encompass issues like unlawful discrimination in housing and employment, improper data collection practices, and harmful outcomes that endanger consumers.”
