Commentary  

Weaponized AI: A New Era of Threats and How We Can Counter It

Allen Lab for Democracy Renovation Fellow Dr. Shlomit Wagman lays out a framework to address the threats artificial intelligence poses to global security and democratic institutions.

Photo by Sasun Bughdaryan, Unsplash

Artificial intelligence has emerged as both a transformative tool and a potential weapon of unprecedented sophistication, presenting serious risks to national security and democratic institutions. Adversarial actors are increasingly exploiting AI’s capabilities to destabilize societies, fuel cybercrime, disrupt public order, and undermine democratic institutions. AI’s unique combination of scale and accessibility creates security challenges that current governance frameworks are not equipped to address, and that demand immediate, comprehensive attention.

Understanding the Landscape of AI Threats

The threats posed by AI are multifaceted and increasingly sophisticated. AI has dramatically lowered the barriers to cyberattacks, deepfake-driven manipulation, misinformation campaigns, large-scale fraud, and social engineering, creating serious challenges for governments and private sector stakeholders alike. Here are a few examples of the unique risks AI poses.

Psychological Warfare and Misinformation: AI has emerged as a powerful tool for psychological warfare, enabling the creation of highly targeted misinformation and deepfake campaigns that can fabricate false diplomatic crises, provoke international conflicts, or incite large-scale panic among civilian populations. For example, AI-generated deepfakes of a head of state making inflammatory remarks could rapidly escalate international tensions to the brink of war before verification occurs. Such operations exploit social divisions, radicalize individuals with hyper-targeted content, and incite unrest.

Election Interference and the Manipulation of Democratic Processes: The ability of AI to manipulate political discourse and democratic institutions is a growing threat to global stability. AI-generated misinformation and deepfake technology can fabricate political scandals, alter public records, and misrepresent candidates, misleading voters and eroding trust in electoral processes. These AI-driven disinformation campaigns may not only influence election outcomes but also weaken democratic resilience by sowing distrust in legitimate institutions.

Cybercrime and Financial Fraud: Social engineering scams, once limited by human deception, are now fully automated, hyper-personalized, and deployed at unprecedented scale. The emergence of tools like FraudGPT has empowered cybercriminals to generate high-quality phishing campaigns that replicate banking websites within seconds. AI agents are now used to craft millions of personalized scams tailored to victims’ specific digital profiles and psychological vulnerabilities. Deepfakes are also used to defeat biometric verification systems, create fraudulent documents and videos sophisticated enough to bypass Know Your Customer (KYC) and Anti-Money Laundering (AML) controls, and enable social engineering fraud, e.g., the case in which a deepfaked CFO instructed an employee to transfer $25 million during a fake video call.

Critical Infrastructure Attacks: Perhaps most alarming is AI’s potential for developing unconventional weapons and cyber threats. The technology is increasingly used to identify and exploit security vulnerabilities in defense systems, corporate networks, and critical infrastructure. By automating cyber reconnaissance and penetration testing, AI enables adversarial actors to execute highly adaptive, autonomous cyberattacks at speeds far beyond human capabilities. These attacks could disable military communications, manipulate satellite systems, or disrupt power grids, posing direct threats to national security. Beyond cyberattacks, AI lowers barriers to the development of nuclear and biological weapons, automated hacking tools, and AI-optimized malware, allowing hostile nations and terrorist groups to build unconventional weapons with minimal resources.

Framework to Address AI Threats

To address these threats while promoting innovation, a globally coordinated effort is needed, engaging both governments and the private sector. Initiating global policies that enhance AI safety and security requires a comprehensive, multi-pronged approach:

International Governance and Standards: AI threats transcend national borders, and without global coordination, we risk a race to the bottom in safety and security standards. Western countries should join forces to ensure that governments and private sector actors globally align on shared principles and enforceable safeguards. One of the most effective models for such coordination is the Financial Action Task Force (FATF), which sets mandatory global standards for financial integrity and compliance by both governments and the private sector. A similar approach could be applied to AI governance, requiring baseline global standards for safe AI development; integration of AI safety measures into national policies across jurisdictions; independent expert evaluations and audits to assess adherence; consistent obligations for the private sector worldwide, preventing regulatory arbitrage; and regular reporting and enforcement mechanisms to ensure compliance. Without such a framework, AI safety efforts will remain fragmented, leaving gaps that adversarial actors will exploit.

Market Incentives for AI Security and Safety: Despite massive investments in AI development, AI security and safety remain critically underfunded. Policymakers should create financial and operational incentives for developing AI safety solutions, including AI-driven fraud detection, deepfake authentication, and advanced cybersecurity tools. Public-private partnerships should accelerate research into AI adversarial defense mechanisms. Moreover, developers should be encouraged, and in some cases required, to integrate risk assessments and adversarial testing for high-risk AI applications, and adopt defensive AI mechanisms that detect and mitigate adversarial manipulation.

Public Awareness: Comprehensive public education is crucial in building resilience against AI-driven threats, including misinformation, fraud, and cyber-enabled social engineering. Governments, industry, and academia should raise awareness of AI risks, particularly in election interference, deepfakes and impersonation, and financial crime. They should promote digital competence and responsible AI use, ensuring the public understands how to identify and respond to AI-generated deception.

Regulatory Frameworks: Consider whether a regulatory framework is necessary to enhance the above recommendations, recognizing that certain AI risks – such as national security threats, cybercrime, and threats to democratic values – may not be adequately addressed by the private sector on a voluntary basis. As noted, establishing a global framework and enforceable standards is highly recommended to ensure consistent implementation across jurisdictions, prevent regulatory gaps, and promote fair competition. This approach would help align stakeholders worldwide, ensuring AI safety measures are adopted broadly while supporting responsible innovation.

Conclusion

The stakes are existential. Without coordinated, proactive intervention, AI’s potential to manufacture consensus and manipulate societal systems will continue to pose an unprecedented challenge to global security and democratic institutions.

As technological capabilities evolve, our approach to managing AI must be equally dynamic, collaborative, and forward-thinking. The future of our global security and democratic institutions depends on our ability to effectively mitigate AI’s most serious risks.

I would like to thank Sarah Hubbard for her contribution to this commentary.
