What AI Policy Under Trump 2.0 Means
President-elect Donald Trump has indicated that he intends to significantly alter United States artificial intelligence (AI) policy as he prepares for his second term. Although the exact plan of action is unclear, the incoming Trump administration has declared that it will scale back the federal oversight structures established under the Biden administration. In particular:
- Repeal of Biden's AI Executive Order: Biden's executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was signed in October 2023. It builds on the October 2022 Blueprint for an AI Bill of Rights and focuses on responsible AI development, implementing safety and privacy standards, and promoting ethical use. Among other things, it called on developers to report on their models' training and safety measures, and mandated that the National Institute of Standards and Technology (NIST) establish guidelines for detecting and mitigating model flaws, including biases. NIST has since released AI guidance documents on risk management, secure software development, synthetic content watermarking, and preventing model abuse, in addition to launching multiple initiatives to promote model testing.
- Potential Elimination of the AI Safety Institute (AISI): The U.S. AI Safety Institute (AISI) was established under the Biden administration within the Department of Commerce's National Institute of Standards and Technology (NIST). The AISI was created exclusively to study AI risks and develop rigorous standards for testing the safety of AI models intended for public use. With the future of the AISI now in doubt, "there is a push from some tech industry players and think tanks to make the AISI permanent before Trump takes office," according to Adam Thierer, Resident Senior Fellow of Technology and Innovation at the R Street Institute.
Why the pushback on Biden's Executive Order, and what changes can we expect?
The Trump administration has declared a pro-innovation approach to AI and is seeking to reduce what it sees as government overreach in regulation and barriers to AI development and adoption.
There are two provisions in the Biden Executive Order that the Trump administration objects to:
- One provision specifies reporting requirements for how frontier model developers and cloud service providers must test and risk-assess their AI models through adversarial safety testing, a practice known as "red teaming." The objection is that the testing and reporting process is too onerous, slows innovation, and effectively forces companies to disclose their trade secrets.
- The other provision mandates NIST guidelines on detecting model bias and algorithmic fairness, intended to ensure that AI models are free of biases that could discriminate against protected categories such as sex, race, or age. The objection here is that the guidelines are politically biased and effectively introduce government censorship. How algorithmic discrimination should be defined and detected is central to this debate.
It is worth noting that during his first term, Trump signed the first-ever executive order on AI, launching the American AI Initiative; he later established the National AI Research Institutes and directed federal agencies to prioritize AI research and development, with attention to civil liberties, privacy, and values like trustworthiness in AI applications. As such, a wholesale retrenchment of the NIST guidelines is unlikely. More probable is a tighter focus on removing perceived political leanings in AI models, and perhaps a re-evaluation or loosening of NIST's guidelines on algorithmic or social discrimination.
The possible scrapping of the reporting regime for frontier model developers and cloud service providers has also raised concerns. Per the Brookings Institution: "These requirements are one of the only legally binding transparency mechanisms for companies that develop or provide resources for the most powerful, and potentially dangerous, AI systems, and their removal would be a huge blow to the movement for guardrails on frontier AI."
Here again, the picture isn’t entirely clear, especially with the increasing influence of Elon Musk within the Trump administration.
The Elon Factor
Despite Trump's stance on reporting requirements for frontier AI vendors, he will still need to address AI safety and the need for guardrails to mitigate potential catastrophic risks from frontier AI systems. In replacing or rebranding the Biden Executive Order, he will need to retain and extend AI policies that address the growing threats a rapidly evolving AI technology landscape presents to humanity.
It seems likely that influential figures such as Elon Musk will play a role in shaping the Trump administration's AI policies. While the administration favors fewer regulations, Musk's standing within it and his advocacy for safety standards could lead to a more nuanced approach, balancing technological acceleration and deregulation with necessary safety and ethical oversight. Musk has in the past called for the establishment of a regulatory body to oversee AI and ensure it does not endanger the public. This sounds very much like the AISI, which is currently at risk of being eliminated.
Musk has repeatedly warned about the existential risks posed by unchecked AI. During the UK AI Safety Summit, Musk emphasized that while AI regulation might be "annoying," it is necessary, likening it to having a "referee" to reduce the threat to mankind. He was also one of the few technology executives to support California's vetoed AI safety bill (SB 1047), which would have required large-scale AI models to undergo safety testing. His perspective was that AI should be regulated like any other technology that poses a risk to the public. Hopefully, his viewpoint will be heard as the Trump administration looks to formulate a revised AI policy next year.
So, what will actually change under Trump 2.0? It is hard to tell right now. One thing is for sure, though: with a reduced focus on AI oversight at the federal level, we will see a rise in state-level legislation to fill the AI regulation gaps.
State Regulation: Kicking the Can Further Down the Road
A big criticism of Biden's Executive Order was that it "lacked teeth": it provided non-binding guidance and relied on organizations to act on that guidance voluntarily.
This has led to a proliferation of state-level regulation, particularly in California, New York, and Colorado. State policymakers have introduced close to 700 pieces of AI legislation this year alone, with dozens becoming law. This trend will most likely accelerate under Trump's more "hands-off" approach.
Some examples of state-led AI legislation in 2024 include:
- In March, Tennessee passed a law protecting voice artists from AI cloning.
- In September, California Governor Gavin Newsom signed 17 laws covering specific generative AI uses and outcomes, including bills on deepfakes, AI watermarking, child safety, election misinformation, performers' AI rights, and training data transparency.
- In May, Colorado enacted the first comprehensive AI law, the Colorado Artificial Intelligence Act (CAIA). The law adopts a risk-based approach to AI regulation and requires companies to inform people when an AI system is being used. While it does not allow individuals to sue over AI use, it sets up a process for the state to investigate and pursue consequences for bad actors. Additionally, the Colorado Division of Insurance has issued a regulation to prevent life insurers from engaging in race-based discrimination through AI models that rely on external consumer data and information sources.
- Connecticut and Texas have considered similar policies aimed at preemptively regulating AI systems to address perceived “algorithmic discrimination.”
This proliferation of state-led AI regulation is a huge problem for organizations that operate across multiple jurisdictions. With each state introducing its own unique AI policy, it becomes incredibly difficult to track and comply with the many different, and sometimes conflicting, state regulations.
What is needed is a national AI policy framework that removes the need for a patchwork of state-level policies.
We can but hope!