
Lethal Autonomous Weapons Systems and the Dehumanisation of War

George Yankovich

The international arms race has entered a worrisome new phase as we inch closer to the widespread deployment of lethal autonomous weapons systems (LAWS).


A LAWS is a ‘weapon system that, once activated, can select and engage targets without further intervention by a human operator.’ Science fiction has captured the popular imagination with stories of self-aware ‘killer robots’, and precisely because of this dystopian framing, it is all too tempting, and dangerous, to believe that such a future could not come about.


The ‘killer robots’ of the future will not be shiny, skull-faced bipeds. Their precursors are already out there, and although different to science fiction’s imagining, they are far more familiar to us. Israel’s Iron Dome air-defence system is a quintessential example of how Artificial Intelligence (AI) will change the implications of prolonged warfare. In service since 2011, it independently detects and shoots down rockets fired from Gaza, with a reported interception rate of roughly 90 per cent. Critics argue, however, that what Israel has gained in security it has sacrificed in long-term diplomatic goodwill: although the Iron Dome protects Israel from periodic rocket attacks, it does nothing to remove the motivations behind them, while Israel’s newfound security reduces its incentive to pursue a diplomatic solution.


Whether or not one agrees with this sobering assessment, one should at least be open to appreciating that the cost of casualties, as grave as it may be, is often a key consideration when assessing whether a conflict is worth prolonging. Such concerns reared their heads in ground-based Western conflict theatres like Vietnam, Iraq, and Afghanistan, and far less so in air-dominated conflicts like Kuwait, Yugoslavia, and Syria. It is worth noting that Western nations enjoyed far more favourable outcomes and far more placid public responses in the latter cases, precisely owing to the low-risk nature of ‘war from above’ as a strategic tool. When war becomes so asymmetrical as to effectively eliminate the cost of waging it, except in purely monetary terms, what else do we rely on but the better angels of our nature to draw it to a close? And even then, is that enough?


The dehumanisation of war, if we may so characterise the emerging trend in Western warfare, will only be compounded as human-out-of-the-loop weapons systems like LAWS become a more common presence on the battlefield. Contrast this with human-in-the-loop systems, in which the decision to engage remains in the hands of a human operator. In theory, the drone pilot must be confident that they are acting on the best available information and minimising collateral damage, knowing that they are morally culpable in the case of a miscalculation. Could one say the same of an algorithm, no matter how advanced it is?


These questions will only become more salient as LAWS are deployed in offensive roles. Take, for example, the Israeli-designed Harpy loitering munition, which homes in on enemy radar emissions and destroys its target by detonating on impact. The efficiency and speed of response of such drones was a decisive factor in Azerbaijan’s victory over Armenia in the 2020 Nagorno-Karabakh conflict, in which Armenia lost a third of its tank fleet and roughly six times as much military equipment overall as Azerbaijan. It is arguable that Azerbaijani forces won not on the basis of better organisation and tactics, but because they had access to better and more expendable firepower from their Turkish and Israeli allies; after all, what cost is one lost drone compared to a fighter pilot’s life?


The future is decidedly grim. But just as international conferences have led the charge against other non-conventional weapons, like cluster bombs, landmines, and chemical weapons, surely headway has been made towards regulating the use of LAWS, or banning them outright?


Unfortunately, the international community has dithered. In December 2021, the Review Conference of the UN Convention on Certain Conventional Weapons failed to agree on any changes to this crucial treaty that would impose limits on the development and deployment of LAWS. Unsurprisingly, the pushback came not from smaller nations, who might understandably worry about the implications of LAWS for their sovereignty, but from superpowers like the United States, Russia, and India, which stand only to gain by bolstering their already formidable military strength.


A future in which LAWS are as common on the battlefield as artillery or armoured vehicles is unfavourable to almost everyone, including military personnel, who could be cut out as the ‘middle men’ when the state exercises its monopoly on the legitimate use of force. The international campaign Stop Killer Robots has called for a blanket ban on human-out-of-the-loop weapons systems, but amending international law is not enough. We must acknowledge that the problem of LAWS is not a hypothetical ‘what if’. It is reality, and we are turning a blind eye to it if we do not attempt to address it, discuss it, and solve it as we would any other political problem; only in this case, the consequences of failure may be too grave to comprehend.



George Yankovich is an undergraduate student at the University of Adelaide studying Law and International Relations.
