How AI is reshaping the rules of war
By Inge Ruigrok
Rather than starting from treaties and diplomatic declarations, Ingvild Bode’s ERC-funded project AUTONORMS asks a deceptively simple question: how do everyday practices make norms? In international relations, new norms are typically established in public forums, such as UN debates, where states present their positions and negotiate soft or hard law – written documents that are expected to guide state behaviour.
Bode argues that norms also emerge from the bottom up, through the practices that states engage in. ‘Especially for technologies in warfare, there is often a long history of use before diplomats debate them as something that needs to be governed,’ she says. Historical examples, such as landmines, show that weapons can be used for decades before they become the subject of international conventions.
During that period of quiet experimentation and operational routine, powerful understandings of what is normal and acceptable begin to take shape. Such shared understandings of appropriate behaviour – or social norms – are often implicit and not written down.
Such norms may also not be universally appropriate, says Bode. ‘Slavery, for example, was once a norm in the sense that it expressed an idea of appropriate behaviour and was normalised, even though we now consider it utterly inappropriate. Appropriateness, inherent to norms, is not necessarily shared by everyone – but such norms still emerge, and they strongly influence, for instance, how states and militaries design and employ new technologies.’
When practice becomes the rule
It is this quiet, often unquestioned power of norms that interests Bode when it comes to autonomous weapons. Instead of treating them as a distant, futuristic threat, she follows them into today’s war rooms. She looks at how AI‑enabled weapon systems – from targeting software to autonomous drones – are designed, tested, and folded into everyday military routines, and how these routines reset expectations about what is acceptable in war. By tracing such practices in China, Japan, Russia and the United States, she demonstrates that, even when diplomats move slowly, patterns of use and experimentation are already rewriting international norms on the use of force.
It is not difficult to see why this matters now. ‘If you look at the global headlines this week, there is a lot of reporting around the use of AI in warfare, both in weapon systems and more broadly in military decision making,’ Bode says. ‘When we started back in 2020, it still had a bit of a science‑fiction element. But in the past years, we have seen much more use of various types of such systems in warfare, and people are getting more concerned, especially since the invasion of Ukraine and, more recently, with the war involving Iran.’
Watching how states argue
Doing this kind of research is anything but straightforward. Much of the relevant practice is hidden from public view, protected by secrecy and security classifications. Bode’s team therefore combines several methods. They attend UN meetings as observers and engage in informal discussions with diplomats and experts on the sidelines. Over time, this has made them trusted interlocutors and opened the door to expert meetings under the Chatham House Rule, where officials, international organisations and industry representatives speak more freely.
These insights are complemented by painstaking open‑source work. The team reads manufacturers’ press releases, technical brochures and interviews with company representatives to understand AI‑enabled systems. They cross‑check military systems against similar civilian applications, where technical documentation is often more accessible, because the underlying technologies are largely the same.
While direct observation of military exercises, especially in Russia and China, is not possible, Bode was surprised by how much can be learned by triangulating informal conversations with publicly available documents. ‘In some instances, major failures occurred in certain systems or their predecessors. For example, in my research on air defence systems, I found notable cases of fratricide. Such crises often prompted greater openness: more people spoke up, and more reports appeared on how the systems operated.’
Eroding human control
A major concern is how these practices change the human role in decisions on the use of force. ‘Early debates on autonomous weapons often centred on the claim that we need direct human supervisory control over specific use‑of‑force decisions and that anything falling short of this should be prohibited,’ says Bode. ‘But we already have systems that integrate automated or autonomous technologies, such as air defence systems. In these systems, a human supervisor receives an output from the system – for example, this is a target, it should be attacked – and must decide yes or no.’
For Bode, this raises a stark question: is such a person truly exercising control, or merely rubber-stamping the system’s output? ‘Sometimes they have as little as ten seconds to make a decision, which is hardly enough time to independently verify that the target should be attacked,’ she says. ‘By letting AI do much of the analysis, the human becomes a passive supervisor rather than an active controller. Adding AI technology increases complexity in ways that can exceed human cognitive capacities. I find this concerning: we comfort ourselves with the idea that there is still a human “in the loop”, but that human may not have the necessary situation awareness in the moment.’
This erosion of human agency is one aspect of what Bode calls a ‘governance gap’ around autonomous weapons. ‘Many states have become very invested in spelling out what the criteria for human control are, and how to ensure human control throughout the entire life cycle of AI‑based systems. Choices that determine this are often made at the design stage: how can humans understand the basis on which the AI reaches a certain output, how can they question that output, and so on? These questions need to be addressed during the design and testing phase. Once a system is in use, there are limits to what can be changed.’
Shifting positions
AUTONORMS also highlights how major powers navigate this emerging landscape. ‘In the United States, the official position shifted from strong scepticism towards regulation – arguing that existing international humanitarian law is sufficient – to, under the Biden administration, a greater openness to soft‑law approaches,’ Bode explains. The US put forward ‘responsible AI’ principles intended to guide military use of AI. ‘More recently, however, we see a turn back towards deregulation and increasing pressure on companies that try to draw red lines on the use of their technologies in warfare.’
China, by contrast, has maintained a more ambivalent position, says Bode. At first, it backed calls to negotiate a new treaty banning some AI weapon systems, joining Global South countries advocating for stricter international law. ‘Over time, however, it became clear that China’s proposed prohibitions were narrowly defined and excluded much of what is already happening in practice. As dynamics with the US have evolved, China has also sought to preserve greater room for manoeuvre in developing these systems.’
‘If we look beyond official positions and focus on practices, we see further nuances,’ Bode continues. ‘Domestically, China has moved strongly towards regulating civilian AI applications to limit risks associated with technologies such as generative AI. I am very interested in whether there is a spillover into the military domain. If systems are developed to comply with domestic regulations in the civilian sphere and then applied to military uses, they have already been subject to some regulatory constraints.’
‘Overall, I am keen to unpack these differences rather than treating states as having a single unified position. There are many forces pulling them in different directions, and those tensions are reflected in their practices.’
Slippery slope
For Bode, these cross‑domain connections are crucial. She worries about a ‘slippery slope’ in which accepting reduced human control in high‑stakes military settings normalises similar reductions in everyday civilian contexts – from policing to welfare or employment.
Too often, she argues, AI is presented as something that simply happens to us, as if it followed an inevitable trajectory and humans could only react. ‘I think it is crucial for policy‑makers and designers to realise that they are actors in this process. They make choices about technologies, and those choices have deep normative consequences.’
The AUTONORMS project goes beyond analysis. In recent years, Bode has actively contributed her findings to policy and public debates. She regularly intervenes in UN meetings, collaborates with the UN Institute for Disarmament Research (UNIDIR), and has submitted evidence to parliamentary and UN inquiries. She served on the Global Commission on Responsible AI in the Military Domain, financed by the Netherlands, which brought together technical, legal and policy experts to translate research into concrete policy advice.
Building on this research, Bode was awarded an ERC Proof of Concept grant to design a toolkit offering practical guidance on strengthening human control throughout the AI lifecycle. ‘My hope is still that we will have binding international law for responsible AI in the military domain, but even then, we will need bottom‑up practices to make it real – in ministries of defence, through military doctrine and training, and via industry standards,’ she says. ‘Whatever happens at the top, this implementation level is where change can and must happen.’

Biography
Ingvild Bode is a Professor of International Politics at the University of Southern Denmark, where she is also Director of the Center for War Studies. Professor Bode’s research examines how applications of AI in the military domain change international norms, especially on the use of force. She has published extensively in these areas, including in journals such as the European Journal of International Relations, Review of International Studies, and Ethics and Information Technology. Professor Bode leads large-scale research projects in this space, including the ERC-funded AUTONORMS project and the HuMach project, funded by Independent Research Fund Denmark. She also serves as an academic ambassador of the Lawful by Design Initiative led by Article 36 Legal.