President-elect Donald Trump has repeatedly vowed to do away with President Joe Biden’s year-old Executive Order (EO) on AI, which emphasized safety, accountability, and oversight of AI systems. The approach Trump has said he will take instead could speed up AI innovation, but it also could substantially increase risks related to safety, ethics, national security, and economic transition.
At a campaign rally in December 2023, Trump vowed to repeal Biden’s EO on day one.
“When I’m reelected,” Trump said, “I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.” Trump further declared that Department of Homeland Security Secretary Alejandro Mayorkas had used AI to censor political speech, a claim for which there is no evidence.
Similarly, the 2024 Republican platform reflects a commitment to reducing governmental oversight of AI development, emphasizing instead the importance of free speech and human-centric progress in the field.
“We will repeal Joe Biden’s dangerous Executive Order that hinders AI innovation and imposes radical leftwing ideas on the development of this technology,” the platform says, adding that, “in its place, Republicans support AI development rooted in free speech and human flourishing.”
In the absence of federal guardrails, unchecked AI development and use could leave the US more vulnerable to AI-related risks and less prepared for the broader economic and social impact that AI is expected to have in the coming years.
Should Trump pursue a deregulatory approach to AI, it could significantly impact existing AI policies and directives, including the Justice Department’s draft guidelines on AI and biometric tools, the goal of which is to protect privacy and civil rights while also putting in place policies and guidelines for combating crime facilitated by AI.
A Republican-controlled Congress also could kill recently introduced bills such as HR 10092, which would require each federal agency to have a dedicated civil rights office to “identify, prevent, and address algorithmic bias, ensuring staff have the expertise to analyze and rectify discriminatory outcomes.”
The bill is the latest in a series of directives and proposals that are designed to address the potential for bias in AI applications such as facial recognition technology. “Federal agencies increasingly rely on these technologies to make decisions that profoundly impact people’s lives; however, unchecked algorithmic systems have been shown to unfairly target vulnerable communities,” the bill’s sponsors said.
The measure also would create an interagency working group to coordinate activities to protect civil rights in AI and ensure fair treatment for all communities, and it would require annual reporting on “the risks posed by algorithmic systems, actions taken to mitigate these risks, and recommended legislative or administrative measures.”
According to the U.S. Commission on Civil Rights, facial recognition’s benefits for law enforcement and civil applications could be outweighed by its negative impact on civil rights if safeguards are not put in place. In its report, The Civil Rights Implications of the Federal Use of Facial Recognition, the Commission said the technology raises a range of concerns and identified several areas where improvement is needed to safeguard against civil rights violations.
In February 2019, Trump signed Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, which aimed to promote AI development by prioritizing research and development, enhancing access to federal data, and fostering international collaboration. Notably, it did not introduce new regulatory measures for AI technologies.
However, throughout his presidential campaign, Trump expressed intentions to repeal Biden’s Executive Order, which mandated safety testing and oversight for advanced AI systems. Trump and his allies have argued that such regulations hinder innovation and impose restrictive ideologies on AI development.
A draft executive order reportedly has been prepared for Trump that aims to eliminate “unnecessary and burdensome regulations” on AI development. While the text of the proposed order has not been made public, reports indicate that it would call for an immediate review and elimination of regulations deemed to hinder AI innovation, with the goal of fostering an environment more conducive to AI development.
The draft order is said to also call for creating agencies led by industry stakeholders to evaluate AI models and safeguard systems from foreign threats, potentially shifting oversight from government bodies to private sector entities. The draft also is said to include plans for a series of “Manhattan Projects” aimed at accelerating the development of military AI technologies, indicating a focus on enhancing national defense through AI, something that is already well underway.
Trump’s deregulatory approach has garnered support from prominent tech figures, including Elon Musk, Marc Andreessen, and Peter Thiel. These individuals advocate for minimal regulation to accelerate AI advancements and to maintain a competitive edge over nations like China.
Critics, though, caution that reducing AI regulations could lead to increased risks, such as the proliferation of biased algorithms, misinformation, and potential misuse in areas like facial recognition, autonomous vehicles and healthcare. They argue that appropriate oversight is essential to ensure AI technologies are developed and deployed responsibly.
Biden’s EO emphasized safety, accountability, and oversight of AI systems and introduced specific requirements for companies working with advanced AI models, mandating that they undergo rigorous testing and to provide safety assurances before deployment. Biden’s order also outlined standards for data privacy, national security considerations, and protections against misuse.
Trump’s EO, by contrast, was minimal in terms of safety protocols and oversight, focusing on enabling development without explicit regulatory requirements. It encouraged federal agencies to support innovation but lacked concrete measures for AI accountability.
Biden’s EO called for a regulatory framework for AI safety, specifically for models that pose potential risks. It mandated independent assessments of AI systems and required transparency about how they function and are designed, to address concerns about misinformation, bias, and potential harmful uses.
Trump’s EO encouraged international partnerships in AI development, signaling openness to collaboration with allies for shared advancements, while Biden’s EO prioritized national security, implementing measures to protect against foreign threats and misuse, and to prevent unauthorized use of AI technologies that could impact US interests, putting more emphasis on safeguarding critical AI systems.
If Trump goes ahead and repeals Biden’s executive order, it could have any number of potential detrimental impacts, given that Biden’s order aimed to implement guardrails in the absence of congressional action. Repealing these guardrails could reduce the safety checks currently in place for high-stakes AI applications, potentially leading to increased risks, such as biased decision-making and misinformation.
In addition, Biden’s mandate for companies to disclose how their AI models work and provide accountability for their performance would be weakened, which could limit public understanding of how AI-driven decisions are made and make it harder to address issues if AI systems are found to have harmful biases or inaccuracies.
A repeal of Biden’s order also would likely remove ethical standards meant to mitigate issues like bias, discrimination, and misuse of AI, particularly in sensitive areas like surveillance and automated decision-making. Without those standards, there is a risk that AI could be deployed in ways that are ethically questionable or potentially harmful to certain demographics.
By removing Biden’s measures to protect AI technology from misuse by foreign actors, the US could face heightened risks from espionage, cyberattacks, or unauthorized access to sensitive AI models, especially those that might be used in military or intelligence applications.
Loss of international collaboration on safe AI also could result in the US falling out of alignment with allies who are working toward responsible AI governance, potentially leading to discord in international AI standards.
Repealing or weakening Biden’s measures for protecting consumer data used by AI systems could lead to diluted safeguards around data privacy and security, raising concerns about how personal data might be exploited by AI without adequate oversight. It could also result in fewer consumer rights and protections, potentially eroding public trust in AI.
Should Biden’s federal regulatory framework be eliminated, states might seek to implement their own AI regulations, creating a patchwork of unwieldy laws whose inconsistencies and uncertainty for companies operating across state lines could complicate compliance and drive up costs.
In conclusion, if Trump repeals Biden’s executive order and there’s a reduction in oversight, it could weaken existing safeguards on the use of AI, amplifying privacy risks, potential biases, and legal challenges while also fragmenting the regulatory landscape. Such changes could impact civil liberties, public trust, and the integrity of AI tools used by both state and federal law enforcement agencies.