What AI Policy Under Trump 2.0 Means
President-elect Donald Trump has indicated that he intends to make significant changes to United States artificial intelligence (AI) policy as he prepares for his second term. Although the exact plan of action is unclear, the incoming administration has declared that it will scale back the federal oversight structures established under the Biden administration. In particular:
- Repeal of Biden's AI Executive Order: Biden’s executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was signed in October 2023. It builds on the October 2022 Blueprint for an AI Bill of Rights and focuses on responsible AI development, implementing safety and privacy standards, and promoting ethical use. Among other things, it called on developers to report on their models’ training and safety measures, and mandated that the National Institute of Standards and Technology (NIST) establish guidelines for detecting and mitigating model flaws, including biases. NIST has since released AI guidance documents on risk management, secure software development, synthetic content watermarking, and preventing model abuse, in addition to launching multiple initiatives to promote model testing.
- Potential Elimination of the AI Safety Institute (AISI): The U.S. AI Safety Institute was established under the Biden administration within the Department of Commerce’s NIST. The AISI was created exclusively to study AI risks and develop rigorous standards for testing the safety of AI models intended for public use. With the future of the AISI currently in doubt, “there is a push from some tech industry players and think tanks to make the AISI permanent before Trump takes office,” according to Adam Thierer, resident senior fellow of technology and innovation at R Street.
Why the pushback on Biden's Executive Order, and what changes can we expect?
The Trump administration has announced a pro-innovation approach to AI and is seeking to reduce what it sees as government overreach in regulation and barriers to AI development and adoption.
There are two primary provisions in the Biden Executive Order that the Trump administration objects to:
- One provision specifies reporting requirements for how frontier model developers and cloud service providers must test their AI models and assess their risks, a practice known as "red teaming." The objection is that the testing and reporting process is too onerous, slows innovation, and effectively forces companies to disclose their trade secrets.
- The other provision covers the NIST guidelines on detecting model bias and algorithmic fairness, intended to ensure that AI models are free of biases that could discriminate against protected categories such as sex, race, or age. The objections here are that the guidelines appear politically biased and effectively introduce government censorship. How algorithmic discrimination should be defined and detected is a central part of this debate; a minimal illustration of one such check appears after this list.
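To make the idea of "detecting algorithmic discrimination" concrete, the sketch below computes a demographic-parity check over hypothetical model decisions. This is only one of many possible fairness metrics and is not the specific method prescribed by NIST's guidance; the data, group labels, and 0.8 threshold (the contested "four-fifths rule" borrowed from US employment law) are illustrative assumptions.

```python
# Illustrative sketch only: a simple demographic-parity check, one of many
# possible fairness metrics. This is NOT the specific method prescribed by
# NIST guidance; the data, group labels, and threshold are hypothetical.

def selection_rates(outcomes, groups):
    """Positive-outcome rate (e.g., loan approvals) for each group."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return rates

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    A common (and contested) rule of thumb flags ratios below 0.8,
    the "four-fifths rule" borrowed from US employment law.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions (1 = approved) and group membership.
outcomes = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(outcomes, groups)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")             # e.g. {'A': 0.8, 'B': 0.4}
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flagged under 4/5 rule
```

Real-world audits are considerably more involved, with confidence intervals, intersectional groups, and alternative metrics such as equalized odds, which is precisely why the basis and scope of the NIST guidelines are contested.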
It is interesting to note that during his first term, Trump signed the first-ever executive order on AI, the American AI Initiative, and later launched national AI research institutes and directed federal agencies to prioritize AI research and development, with attention to civil liberties, privacy, and values like trustworthiness in AI applications. As such, it is unlikely we will see a wholesale retrenchment of the NIST guidelines. What is more likely is a re-evaluation and loosening of NIST's guidelines on algorithmic or social discrimination.
The possible scrapping of the reporting regime for frontier model developers and cloud service providers has also raised concerns. Per the Brookings Institution: “These requirements are one of the only legally binding transparency mechanisms for companies that develop or provide resources for the most powerful, and potentially dangerous, AI systems, and their removal would be a huge blow to the movement for guardrails on frontier AI.”
Here again, the picture isn’t entirely clear, especially with the increasing influence of Elon Musk within the Trump administration.
The Elon Factor
Despite the Trump administration’s stance on repealing Biden’s reporting requirements for frontier AI vendors, it has acknowledged that frontier AI models have the potential to cause catastrophic harm and that these types of risks need to be managed.
It seems likely that Elon Musk will have some influence in shaping the Trump administration's AI policies. While the Trump administration favors fewer regulations, Musk's standing in the administration and advocacy for safety standards could lead to a more nuanced approach, balancing technological acceleration and deregulation with necessary safety and ethical oversight.
Musk has repeatedly warned about the existential risks posed by unchecked AI. During the UK AI Safety Summit last year, he emphasized that while AI regulation might be "annoying," it is necessary, likening it to having a "referee" to reduce the threat to mankind. He was also one of the few technology executives to support California's vetoed AI safety bill (SB 1047), which would have required large-scale AI models to undergo safety testing. His view was that AI should be regulated like any other technology that poses a risk to the public.
Musk has in the past called for the establishment of a regulatory body to oversee AI, ensuring it does not present a danger to the public. It is possible that the AI Safety Institute may be retained and positioned to play this role, but with a stronger focus on national security and specific public safety AI risks.
Increased State Regulation
Over the last two years there has been a noticeable acceleration in state-level AI regulation. Citing the absence of federal legislation to address fast-changing developments in AI technology, states such as California, New York, and Colorado have been leading the charge in establishing their own AI regulatory frameworks. These states have introduced a variety of measures aimed at addressing concerns related to privacy, bias, transparency, and accountability in AI systems across state agencies and private industry.
State policymakers have introduced close to 700 pieces of AI legislation this year alone, with dozens becoming law. According to the National Conference of State Legislatures, “in the 2024 legislative session, at least 45 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and 31 states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation.”
Examples of state-led AI legislation in 2024 include:
- In March, Tennessee passed a law protecting voice artists from AI cloning.
- In September, California Governor Gavin Newsom signed more than a dozen laws covering specific generative AI uses and outcomes, including bills on deepfakes, AI watermarking, child safety, election misinformation, performers’ AI rights, and training data transparency.
- In May, Colorado enacted the first comprehensive AI law, the Colorado Artificial Intelligence Act (CAIA). The law adopts a risk-based approach to AI regulation and requires companies to inform people when an AI system is being used. While it does not allow individuals to sue over AI use, it sets up a process for investigating and penalizing bad actors. Additionally, the Colorado Division of Insurance has issued a regulation to prevent life insurers from engaging in race-based discrimination through AI models that rely on external consumer data and information sources.
- Connecticut and Texas have considered similar policies aimed at preemptively regulating AI systems to address perceived “algorithmic discrimination.”
This proliferation of state-led AI regulation is becoming a significant problem for organizations that operate across multiple jurisdictions, such as insurance carriers and banks. With each state introducing its own unique AI policy, it becomes incredibly difficult to track and comply with the different, and sometimes conflicting, state regulations. This problem is likely to accelerate under Trump's more ‘hands-off’ approach.
However, the new administration also has a unique opportunity to introduce a federal AI framework or legislation that obviates the need for state-level legislation on common AI concerns, an area the Biden administration did not address. It will be very interesting to see how the Trump 2.0 AI policy develops next year.