I recently got to attend the inaugural Peter Wallensteen Lecture at Humanistiska Teatern. When I originally signed up it already had a waiting list, but I luckily got an email about a spot opening up just a few days before. The series is intended to honor department founder Peter Wallensteen, and this first lecture was delivered by Professor Mary Ellen O’Connell.
I don’t think I fully appreciated that this wasn’t a ’normal’ lecture until I got there. The audience was full of international ambassadors, UN representatives, and other diplomats; security was posted outside; and the speaker herself was a much bigger name in the field of international dispute resolution and conflict law than I’d realized.
As for Peter Wallensteen himself, I had attended his own lecture on nuclear weapons and academic responsibility before, and it was so interesting, and so eye-opening, that I am still thinking about it months later. How amazing is it to be able to just go and hear people with so much knowledge and experience in their respective fields?
The lecture focused on encouraging states to move away from deterrence policies (and spending) and instead turn to emerging disruptive technologies for lawful self-defence.
My notes are below. As usual, my own thoughts will be interjected with [L: highlighted inserts.]
AI for Peace
Inaugural Peter Wallensteen Lecture
Introduction (Professor Ashok Swain, Head of Department of Peace and Conflict Research)
Peter Wallensteen turns 80 this year. He founded the Department of Peace and Conflict Research at UU in 1971.
Department became globally recognized as the world’s oldest and largest university department dedicated to peace and conflict research.
- 1972-1999: Wallensteen served as department head; over 60 PhD graduates.
- 1992: Department had only 10 people; now 100, with an annual budget of 100,000,000.
- Wallensteen shaped understanding of international relations.
- Led inter-governmental program on implementation of targeted sanctions.
- Uppsala Conflict Data Program (UCDP) - world’s largest provider of data on organized violence.
- Contributed to mediation in global conflicts: Papua New Guinea, Israel & Palestine, Cyprus, others.
- Still publishing and teaching.
The world is witnessing increased civilian suffering and displacement and an erosion of democratic principles, making the demand for evidence-based policy critical. The rise of AI is revolutionizing every aspect of society; it opens new avenues for monitoring ceasefires and violations, but also risks the weaponization of autonomous systems, surveillance-driven repression, and biases exacerbating social divisions. We need to understand how to harness AI responsibly.
Lecture (Professor Mary Ellen O’Connell)
Robert and Marion Short Professor of Law and Professor of International Peace Studies - Kroc Institute for International Peace Studies at the University of Notre Dame, USA.
The world is scrambling to respond to Donald Trump’s precipitous actions across various issue areas including security. In early March, European leaders announced the intention to spend €800 billion on defense. Denmark’s Prime Minister Mette Frederiksen declared the need to “spend, spend, spend on defense and deterrence.”
O’Connell believes we can spend far less by spending wisely on authentic and lawful self-defense. Deterrence theory persists despite criticism, maintained due to uncertainty about its validity. AI provides a pathway out of deterrence.
There is already significant military AI spending. For true security, AI spending must pivot to lawful self-defense and away from deterrence. Deterrence has failed, often exacerbating conflict.
Two intellectual shifts
- Replace theory of deterrence with principle of lawful self-defense.
- Reorient AI from supporting deterrence to supporting defense.
If we do not do the above, we risk losing control of AI systems. Geoffrey Hinton and other leading AI researchers warn about potential human extinction risks from uncontrolled AI, labeling it a new form of WMD [L: What kind of AI? What applications of it, specifically?]. The current AI arms race is driven by efforts to achieve a strategic monopoly, risking catastrophic loss of control. Removing deterrence would eliminate the potential for self-destruction, freeing resources to address the causes of civil war, climate change, and poverty, and to reinforce the rule of law.
The lecturer critiques the realist deterrence theory dominating NATO and major militaries; she instead proposes the international law of self-defense as a superior alternative. A psychological barrier exists to shifting away from deterrence; AI oriented toward defense could help policymakers confidently pivot [L: Difficult for me to reconcile when lawful self-defence hasn't helped Ukraine much, and Russia's nuclear posturing continues to be used to deter certain levels of defensive support/essentially hamstring Ukraine's capabilities to defend itself].
This rare moment of potential intellectual and technological transformation mirrors post-WWII emergence of nuclear technology.
Defining Deterrence
Realism, Deterrence, MAD, Preemptive War, Proxy War
- Deterrence is an outgrowth of realism.
- Nuclear deterrence via mutually assured destruction (MAD).
- Supports limited use of force as signal of resolve.
- NATO heavily relies on deterrence, influencing high military spending ($2.7 trillion, rising).
- Thomas Schelling’s deterrence theory preserved realism in a nuclear world.
- Realism was developed in the 1930s by E.H. Carr; he advocated military strength over adherence to law [L: Can deterrence coexist with law adherence?].
- The post-WWII United Nations Charter allowed force only in self-defense or with Security Council authorization.
- Hans Morgenthau revived the realist critique, emphasizing military power over uncertain international cooperation (the Thucydides Trap: you must attack a rival before it grows strong enough to attack you).
- Realists consider war a legitimate tool of statecraft [L: Does one have to subscribe to this extent of realist thinking in order to subscribe to deterrence?].
- U.S. shifted from nuclear monopoly strategy to Mutually Assured Destruction (MAD) after USSR developed nukes in 1949.
- Schelling’s MAD theory: Nuclear capabilities alone provide bargaining power.
- “The monstrous power of modern weapons does indeed act as a deterrent” - Pope John XXIII.
Modern Day
Claims that MAD achieves deterrence stability are contradicted by the India-Pakistan nuclear conflicts. Nuclear states still engage in military actions short of nuclear warfare, undermining deterrence theory.
John Mearsheimer advocates U.S. using force against China to prevent Asian hegemony.
Critiquing Deterrence
Deterrence theory:
- Assumes rational actors, which is questionable both empirically and morally.
- Is incompatible with international law, immoral, and existentially risky in the AI era.
Three reasons to abandon deterrence
Law and morality
- Deterrence contradicts international law prohibiting force except self-defense [L: Once a state chooses to disregard law and morality and invade, what good are law and morality to the victim?].
- UN Charter explicitly limits force use to repelling ongoing attacks, not punitive or preemptive actions.
- Pope Francis condemned nuclear deterrence, signing the Nuclear Ban Treaty [L: TPNW?]
- “Nuclear deterrence is not a source of peace but a destabilizing element in the international system that creates a false sense of security, encourages the proliferation of nuclear weapons, threatens the environment, and robs the poor”
- “Nuclear weapons exist in the service of a mentality of fear”
Logic and data
- Criminal law studies show punishment severity is ineffective as a deterrent. Certainty of being detected, not severity of punishment, is what matters.
- The assumption of rational coercion is flawed; human behavior is influenced by factors other than coercion: reward, altruism, etc.
Security
- MAD never effectively deterred Soviet actions; deterrence relies on credible threat of catastrophic retaliation.
- Deterrence risks accidental nuclear warfare.
- Deterrence incentivizes uncontrollable AI systems.
AI
- AI systems vary in autonomy and adaptability.
- AGI (Artificial General Intelligence - human-level intelligence) could evolve into uncontrollable ASI (Artificial Superintelligence - intelligence beyond human level).
Military AI is unregulated by the AI Act due to the deterrence mentality. Researchers propose “Mutually Assured AI Malfunction” (MAIM), analogous to MAD, relying on threats and sabotage to control AI proliferation.
The lecturer’s proposed AI goal: defensive AI systems for physical defense, not offensive deterrence, like the Iron Dome or Aegis systems. Defense-oriented AI could eliminate the need for deterrence and support diplomatic conflict resolution [L: Can such a system be built to neutralize nuclear threats? I hope so!]
Replacing Deterrence/Reorienting AI
International law and physical defense
- Defensive AI systems are viable and lower cost.
- AI improves physical defense, making deterrence obsolete.
AI as guarantor of physical defense
- Implement via UN-ITU (International Telecommunication Union) collaboration between international lawyers and AI developers.
- Defensive AI could prevent conflicts diplomatically.
- AI could fulfill UN’s foundational promise to prevent war.
Q&A
Q: How do you see Swedish security after Sweden joined NATO? A: O’Connell is hopeful about Sweden and Finland joining. Both have exceptional international lawyers. She encourages these lawyers to read her work on this topic. If we recommit to international law within NATO, its original founding principles can reemerge. Lawyers in these newer NATO countries could shift its purpose. There’s considerable NATO buy-in on AI, but it’s fueling an AI arms race - a catastrophe. Scientists advising governments should collaborate with international lawyers to define AI’s role: detecting attacks, serving as an “Iron Dome,” and providing an antidote to offensive capabilities. We need robust cybersecurity and a defensive, rather than offensive, approach. Sweden remained neutral all these years for specific reasons - bring those perspectives to reform NATO, spending less on deterrence and more on AI-based defense.
Q: In an article on deterrence theory titled “The Extortionist’s Doctrine,” Elaine Scarry references Schelling’s thought experiment of exchanging kindergarten classes between the USSR and the U.S. as a hostage mechanism to promote security. Are there non-technological ways to increase shared vulnerability?
A: One reason deterrence persists as the main theory is states’ fear of abandoning a flawed policy. Many countries don’t need drastic measures like hostage exchanges. Countries stuck in this mindset simply need the courage to adopt better solutions. The two-part solution is recognizing the catastrophic potential of an AI arms race (loss of control and annihilation risk) and providing an alternative source of psychological security. Adequate AI defense can alleviate reliance on nuclear deterrence. Anti-nuclear activism has kept the real horror of nuclear weapons alive, but policymakers need a credible alternative beyond nuclear deterrence.
Q: Do you think we need a new international legal framework for your AI-defense idea?
A: We already have the necessary legal frameworks. The core principles of international law are ancient and unchanging. Liberal countries like the U.S., Australia, and the UK have tried creating a framework granting them superior international status over non-Western, non-democratic countries. This severely damaged credibility and compliance. Instead of a new framework, we need to rediscover and adhere to existing frameworks.
Q: What main questions should peace researchers focus on now?
A: Learn international law and AI science. Researchers like Stuart Russell at Berkeley and Toby Walsh at UNSW express significant fears about AI’s potential. We should learn from their concerns.
Q: Should the UN police the principle of lawful self-defense through AI?
A: She does not like the word “policing,” but yes, the UN should be the “locus” for these things. However, O’Connell is concerned about the UN’s current capacity. Military dominance has undermined its credibility, evident from its inadequate responses to the invasions of Ukraine and Iraq. The UN should become a hub of advanced technological knowledge and leadership. While some might find “policing” to be an unavoidable term, O’Connell prefers focusing on “education,” “orientation,” and “peace.” We can use AI defensively and open new avenues to address the current causes of violence. The methodologies to train this kind of AI exist, even though the data we currently input into these systems is problematic.
Q: What questions should we ask international lawyers, and what platforms should we build?
A: Recently, O’Connell attended meetings with top AI ethicists in Cape Town. The UN should convene gatherings to share critical information. Initiatives like “AI for Good” could become platforms for developing AI for peace. Although the UN is currently exploring various AI initiatives, it needs coordinated leadership (perhaps someone like Peter Wallensteen) to integrate these efforts. Institutions and knowledgeable individuals exist, but we lack the inspiration for change. Some NATO funding from Sweden could shift toward this new initiative. We succeeded in 1945; we can do it again in 2025.
Q: The Ambassador of Guatemala references “WarGames” (the Matthew Broderick film). Could we enter that scenario? Would ASI be realist, idealist, or constructivist?
A: It would be idealist, and since when is that a bad thing? Hans Morgenthau promoted realism, backed notably by diplomat George Kennan, who criticized America’s commitment to legalism and moralism in 1951. Legalism shouldn’t be derogatory! Idealism is valuable and deserves recommitment. Realism is fundamentally non-ideal.
Films like “Oppenheimer” help illustrate these issues, though their impact is limited. We risk a “WarGames” or “Terminator” future unless we act soon. There has been a notable recent loss of momentum among researchers and public figures (including Elon Musk and Stephen Hawking) who pledged never to militarize AI; Musk has reversed significantly since then. However, current fears about ASI, combined with the promise of genuine defensive capabilities, offer a chance to revive these crucial ideas through the UN.
Peter Wallensteen Speaks
- Emphasized the potential of logical arguments, historical lessons, and data to demonstrate that peace is achievable.
- Nuclear weapons trend: numbers rose until 1985, then declined due to internal security rethinking.
- Expressed concern about the current increase in nuclear weapons and armed conflicts, driven by military spending.
[L: As a layman, I couldn't help but be skeptical of this vision of forgoing deterrence completely. In another project I have been working on, I also consider using emerging technologies as opposed to nuclear weapons - but in that case, using them still to deter and coerce. So it was nice to hear that someone who actually knows what they're talking about does think AI and EDTs are _realistic_ tools for defence. I think what I still need to reconcile is the idea of using these tools in a purely non-deterrent capacity.]