How will it impact the public sector?
After rolling out its Executive Order on the use of AI in October 2023, the Biden administration announced this week that on December 1, 2024, it will implement a new policy governing the US government's use of artificial intelligence (AI), intended to encourage responsible adoption of the technology.
This move is an effort to prevent US government abuse of AI. Vice President Kamala Harris indicated that the governing AI framework will include measures intended to keep US agencies from using AI in discriminatory ways.
The new policy comes from the Office of Management and Budget (OMB). It seeks to mitigate potential AI threats such as privacy violations, ethical lapses, and discrimination, while also increasing transparency around the government's use of AI. Additionally, it directs all federal agencies to designate a chief AI officer to oversee how each agency uses the technology. One of the main responsibilities of that role will be ensuring compliance and coordinating the use of AI across government entities.
“Guardrails on how the US government uses AI can help make public services more effective,” said OMB Director Shalanda Young. “These new requirements will be supported by greater transparency,” she added, highlighting the agency reporting requirements. “AI presents not only risks, but also tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity.”
According to the government fact sheet, the mandates aim to cover situations ranging from screenings by the Transportation Security Administration to decisions by other agencies affecting Americans’ health care, employment, and housing.
Here are a few examples of how the guardrails for public services may work:
- Giving travelers the ability to opt out of Transportation Security Administration facial recognition at airports.
- Increasing the prevention of bias and disparities in health care.
- Allowing human oversight when AI is used to root out fraud in government services.
- Granting waivers for software that doesn’t comply with administration rules, provided a justification is published.
- Letting individuals seek remedies if they believe AI has led to false information or decisions about them.
The fact sheet also notes that the Biden administration intends to hire at least 100 employees with a focus on artificial intelligence by this summer.
The policy includes verification guidelines for agencies using AI tools: agencies must verify that their tools do not endanger the rights and safety of the American people. Each agency will also be accountable for publishing online a complete list of the AI systems it uses, its reasons for using them, and a complete risk assessment of those systems.
“Leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit,” Harris stated.
One potential implication of the new policy is the indirect effect it could have on the AI industry at large, given that the government uses a great deal of AI technology born out of the private sector. It was noted during the policy announcement that the OMB will be taking additional action to regulate federal contracts involving AI.
Similar to the risk-based approach outlined in the EU AI Act, the new policy will likely apply a more rigorous set of tests as the potential risk of each AI application escalates.
Those following regulatory actions on AI know that this policy is just another point in the evolution of AI policy. Congress first passed legislation in 2020 directing OMB to publish guidelines for agencies by the following year, but OMB issued only a draft of its policies two years after that deadline, in November 2023, in response to the Biden executive order.
“President Biden and I intend that these domestic policies will serve as a model for global action,” Harris stated, but policy changes in the US traditionally move at a snail’s pace. Meanwhile, as mentioned in my last blog, the EU has put a strong stake in the ground for AI governance. So, this is all good for the US government’s internal use, and it may influence how the executive order on the use of AI evolves over time, but the true test for the public sector will come in the long run: getting the US Congress to pass new legislation.