The Biden Administration Issues Regulations On How Government Can Use AI

The Biden Administration announced new guidelines on how federal agencies can and cannot use artificial intelligence (AI).

In a memo released by the Office of Management and Budget (OMB), the regulations strike a balance between mitigating risks and remaining open to innovation. Each agency will be required to appoint people to several roles, including a chief artificial intelligence officer and a senior official to oversee AI implementation. To further grow the AI workforce, close to 100 AI professionals will be hired by Summer 2024.

An initial draft of the memo was introduced before Vice President Harris’ trip to the first global AI summit in the UK in the fall of 2023. It was then opened for public comment before the final version was released on Mar. 28. Harris described the regulations as “binding” and emphasized the need for the guidelines to prioritize the public interest globally.

“President Biden and I intend that these domestic policies serve as a model for global action,” Harris said. 

“We will continue to call on all nations to follow our lead and put the public interest first when it comes to government’s use of AI.”

Agencies must implement safeguards by Dec. 1, including assessing, testing, and monitoring AI’s impacts. If they do not, they will have to stop using the technology unless it is approved as necessary for agency functionality. Because artificial intelligence can be used for harm, Shalanda Young, Director of the Office of Management and Budget, says it’s vital for Americans to trust the government’s use of the technology.

“The public deserves confidence that the federal government will use the technology responsibly,” Young said. 

Several government agencies already use AI, and the memo further explained how the technology will be utilized, including forecasting extreme weather and controlling the spread of disease and opioid use.

The measure is a major step toward guaranteeing safe AI practices, something private companies and other countries are also grappling with. In December 2023, according to Wired, the European Union voted to pass the AI Act, legislation that will govern the creation and use of AI technologies. China is also said to be working on stricter AI regulations.

However, observers think there is more work to be done beyond issuing guidelines. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, questioned exactly what the U.S. government’s testing requirements are and who has the expertise to greenlight them. “I see this as the first step. What’s going to come after is very detailed practice guides and expectations around what effective auditing looks like, for example,” Reeve Givens said. 

“There’s a lot more work to be done.”

Reeve Givens recommended that the administration publish its procurement processes and spell out the requirements companies must meet when the government is looking to buy their AI technology.
