Liza Shulyayeva

AI for Peace - Notes from inaugural Peter Wallensteen Lecture



I recently got to attend the inaugural Peter Wallensteen Lecture at Humanistiska Teatern. When I originally signed up it already had a waiting list, but I luckily got an email about a spot opening up just a few days before. The series is intended to honor department founder Peter Wallensteen, and this first lecture was delivered by Professor Mary Ellen O’Connell.

I don’t think I fully appreciated that this wasn’t a ‘normal’ lecture until I got there. The audience was full of international ambassadors, UN representatives, and other diplomats; security was posted outside; and the speaker herself was a much bigger name in the field of international dispute resolution and conflict law than I’d realized.

As for Peter Wallensteen himself, I had previously attended his lecture on nuclear weapons and academic responsibility, and it was so interesting, and so eye-opening, that I am still thinking about it months later. How amazing is it to be able to just go and hear people with so much knowledge and experience in their respective fields?

AI for Peace, inaugural Peter Wallensteen Lecture

The lecture focused on encouraging states to move away from deterrence policies (and spending) and instead focus on emerging disruptive technologies for lawful self-defence.

My notes are below. As usual, my own thoughts will be interjected with [L: highlighted inserts.]


AI for Peace

Inaugural Peter Wallensteen Lecture

Introduction (Professor Ashok Swain, Head of Department of Peace and Conflict Research)

Peter Wallensteen turns 80 this year. He founded the Department of Peace and Conflict Research at UU in 1971.

Department became globally recognized as the world’s oldest and largest university department dedicated to peace and conflict research.

The world is witnessing increased civilian suffering and displacement, erosion of democratic principles. Demand for evidence-based policy becomes critical. The rise of AI is revolutionizing every aspect of society; opens new avenues for monitoring ceasefires and violations, but also risks weaponization of autonomous systems, surveillance-driven repression, biases exaggerating social divisions. We need to understand how to harness AI responsibly.


Lecture (Professor Mary Ellen O’Connell)

Robert and Marion Short Professor of Law and Professor of International Peace Studies - Kroc Institute for International Peace Studies at the University of Notre Dame, USA.

The world is scrambling to respond to Donald Trump’s precipitous actions across various issue areas including security. In early March, European leaders announced the intention to spend €800 billion on defense. Denmark’s Prime Minister Mette Frederiksen declared the need to “spend, spend, spend on defense and deterrence.”

O’Connell believes we can spend far less by spending wisely on authentic and lawful self-defense. Deterrence theory persists despite criticism, maintained due to uncertainty about its validity. AI provides a pathway out of deterrence.

There is already significant military AI spending. For true security, AI spending must pivot to lawful self-defense and away from deterrence. Deterrence has failed, often exacerbating conflict.

Two intellectual shifts

If we do not do the above, we risk losing control of AI systems. Geoffrey Hinton and other leading AI researchers warn about potential human extinction risks from uncontrolled AI, labeling it a new form of WMD [L: What kind of AI? What applications of it, specifically?]. The current AI arms race is driven by efforts to achieve a strategic monopoly, risking catastrophic loss of control. Removing deterrence would eliminate the potential for self-destruction, freeing resources to address the causes of civil war, climate change, and poverty, and to reinforce the rule of law.

Lecturer critiques realist deterrence theory dominating NATO and major militaries; she instead proposes international law of self-defense as a superior channel. A psychological barrier exists to shifting away from deterrence; AI oriented toward defense could help policymakers confidently pivot [L: Difficult for me to reconcile when lawful self-defence hasn't helped Ukraine much, and Russia's nuclear posturing continues to be used to deter certain levels of defensive support/essentially hamstring Ukraine's capabilities to defend itself].

This rare moment of potential intellectual and technological transformation mirrors post-WWII emergence of nuclear technology.


Defining Deterrence

Realism, Deterrence, MAD, Preemptive War, Proxy War

Modern Day

Claims that deterrence stability is achieved by MAD are contradicted by the India-Pakistan conflicts between nuclear-armed states. Nuclear states still engage in military actions short of nuclear warfare, undermining deterrence theory.

John Mearsheimer advocates the U.S. using force against China to prevent Asian hegemony.

Critiquing Deterrence

Deterrence theory

Three reasons to abandon deterrence

Law and morality

Logic and data

Security

AI

Military AI is unregulated by the EU AI Act, a consequence of the deterrence mentality. Researchers propose “Mutually Assured AI Malfunction” (MAIM), analogous to MAD, relying on threats and sabotage to control AI proliferation.

The lecturer’s proposed AI goal: defensive AI systems for physical defense, not offensive deterrence, like the Iron Dome or the Aegis system. Defense-oriented AI could eliminate the need for deterrence and support diplomatic conflict resolution [L: Can such a system be built to neutralize nuclear threats? I hope so!]

Replacing Deterrence/Reorienting AI

International law and physical defense

AI as guarantor of physical defense


Q&A

Q: How do you see Swedish security after Sweden joined NATO? A: O’Connell is hopeful about Sweden and Finland joining. Both have exceptional international lawyers. She encourages these lawyers to read her work on this topic. If we recommit to international law within NATO, the original founding principles can reemerge. Lawyers in these newer NATO countries could shift its purpose. There’s considerable NATO buy-in on AI, but it’s fueling an AI arms race - a catastrophe. Scientists advising governments should collaborate with international lawyers to define AI’s role: detecting attacks, serving as an “Iron Dome,” and providing an antidote to offensive capabilities. We need robust cybersecurity and a defensive, rather than offensive, approach. Sweden remained neutral all these years for specific reasons - bring those perspectives to reform NATO, spending less on deterrence and more on AI-based defense.


Q: In an article on deterrence theory titled “The Extortionist’s Doctrine,” Elaine Scarry references Schelling’s thought experiment of exchanging kindergarten classes between the USSR and the U.S. as a hostage mechanism to promote security. Are there non-technological ways to increase shared vulnerability?

A: One reason deterrence persists as the main theory is states’ fear of abandoning a flawed policy. Many countries don’t need drastic measures like hostage exchanges; countries stuck in this mindset simply need the courage to adopt better solutions. The two-part solution is recognizing the catastrophic potential of an AI arms race (loss of control and annihilation risk) and providing an alternative source of psychological security. Adequate AI defense can alleviate reliance on nuclear deterrence. Anti-nuclear activism has kept the real horror of nuclear weapons alive, but policymakers need a credible alternative beyond nuclear deterrence.


Q: Do you think we need a new international legal framework for your AI-defense idea?

A: We already have the necessary legal frameworks. The core principles of international law are ancient and unchanging. Liberal countries like the U.S., Australia, and the UK have tried creating a framework granting them superior international status over non-Western, non-democratic countries. This severely damaged credibility and compliance. Instead of a new framework, we need to rediscover and adhere to existing frameworks.


Q: What main questions should peace researchers focus on now?

A: Learn international law and AI science. Researchers like Stuart Russell at Berkeley and Toby Walsh at UNSW express significant fears about AI’s potential. We should learn from their concerns.


Q: Should the UN police the principle of lawful self-defense through AI?

A: She does not like the word “policing”, but yes, the UN should be the “locus” for these things. O’Connell is concerned about the UN’s current capacity, though. Military dominance has undermined its credibility, as is evident from its inadequate responses to invasions like those of Ukraine and Iraq. The UN should become a hub of advanced technological knowledge and leadership. While some might find “policing” an unavoidable term, O’Connell prefers focusing on “education,” “orientation,” and “peace.” We can use AI defensively and open new avenues to address current causes of violence. The methodologies to train this kind of AI exist, even though the data we currently feed into these systems is problematic.


Q: What questions should we ask international lawyers, and what platforms should we build?

A: Recently, O’Connell attended meetings with top AI ethicists in Cape Town. The UN should convene gatherings to share critical information. Initiatives like “AI for Good” could become platforms for developing AI for peace. Although the UN is currently exploring various AI initiatives, it needs coordinated leadership (perhaps someone like Peter Wallensteen) to integrate these efforts. Institutions and knowledgeable individuals exist, but we lack the inspiration for change. Some NATO funding from Sweden could shift toward this new initiative. We succeeded in 1945; we can do it again in 2025.


Q: The Ambassador of Guatemala references the film “WarGames” (starring Matthew Broderick). Could we enter that scenario? Would ASI be realist, idealist, or constructivist?

A: It would be idealist, and since when is that a bad thing? Hans Morgenthau promoted realism, backed notably by diplomat George Kennan, who in 1951 criticized America’s commitment to legalism and moralism. Legalism shouldn’t be derogatory! Idealism is valuable and deserves recommitment. Realism is fundamentally non-ideal.

Films like “Oppenheimer” help illustrate these issues, though their impact is limited. We risk a “WarGames” or “Terminator” future unless we act soon. The momentum among researchers and public figures (including Elon Musk and Stephen Hawking) who once pledged never to militarize AI has notably been lost; Musk in particular has reversed significantly since then. However, current fears about ASI, combined with the promise of genuine defensive capabilities, offer a chance to revive these crucial ideas through the UN.


Peter Wallensteen Speaks


[L: As a layman, I couldn't help but be skeptical of this vision of forgoing deterrence completely. In another project I have been working on, I also consider using emerging technologies in place of nuclear weapons - but in that case, still to deter and coerce. So it was nice to hear that someone who actually knows what they're talking about does think AI and EDTs are _realistic_ tools for defence. I think what I still need to reconcile is the idea of using these tools in a purely non-deterrent capacity.]

Wallensteen on conflict
