
Death by Algorithms: The Rise of Biased Killer Robots

Angela Suriyasenee | Cyber & Tech Fellow

Image credit: Ricardo Gomez Angel

The Wartime Debut of Kargu-2


In 2020, a flying killer robot, the Turkish-designed STM Kargu-2 drone, initiated an attack on human targets without human command, likely for the very first time. Libya's then Prime Minister, Fayez Al-Serraj, launched the weapon as part of Operation Peace Storm, an offensive against General Khalifa Haftar's Libyan National Army and its siege of Tripoli and its citizens. A UN report states that these lethal autonomous weapons systems (LAWS) "hunted down and remotely engaged" the retreating Haftar-affiliated forces, who had "no real protection" against the drones and their so-called swarming capabilities.


The ambiguity of the UN report on Libya, and the lack of clarity on whether there were any confirmed kills whilst the Kargu-2 was in autonomous mode, highlight the pressing challenges surrounding the laws and regulations that govern this kind of weaponry.

What are Killer Robots?


‘Slaughterbots’, like the Kargu-2, operate autonomously using advanced machine learning, real-time image processing, and sensor technologies, rather than relying on constant information transmission between the machine and an operator. These robots exemplify a genuine "fire, forget, and find" capability: once launched, they can guide themselves towards their intended target without further input. But here's the problem. Without meaningful human control or intervention, life-or-death decisions are entrusted to machines that draw on flawed algorithms and training datasets. There are grave concerns regarding the ability of fully autonomous weapons to abide by international humanitarian law, which encompasses the principles of distinction, proportionality, and military necessity. This unsettling reality brings forth a new level of terror that must be addressed: biased killer robots.

“Pale Male Data”, Digital Dehumanisation and Biases Against Marginalised Communities


The reality is, machines do not perceive us as people. To them, we are merely more data to be organised and processed, and flawed humans design flawed machines. Central to the ethical concerns and campaigns against killer robots are the harms of digital dehumanisation. Digital dehumanisation is "a process in which human beings are reduced to data points that are then used to make decisions and/or take actions that negatively affect their lives".


The technology we create inherits our conscious and unconscious racial biases, gender biases, and other global inequalities. These injustices are then integrated and embedded into the training datasets and algorithms used to develop these killer machines, informing their ability to make life-or-death decisions. Facial recognition technology (FRT) and other AI-based technologies disproportionately affect historically marginalised communities. For example, FRT makes troubling errors when identifying women, children and people of colour (POC). MIT researcher Joy Buolamwini has shown how facial and vocal recognition technologies struggle to identify POC, favouring lighter-skinned and outwardly masculine characteristics. Error rates in FRT software exceeded 19 per cent for individuals with darker skin tones, and worsened at the intersection of race and gender: for dark-skinned women, the error rate reached 34.4 per cent. Other studies, in the criminal justice context, have found significantly higher error rates and the misidentification of POC as criminals. This "coded gaze", as Buolamwini puts it, can have very real and very dire impacts on the battlefield, where the misidentification of a civilian as a combatant can prove fatal.
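The mechanism behind these figures is worth spelling out: a system can post a respectable aggregate accuracy while failing badly on one subgroup, which is precisely what disaggregated evaluation is designed to expose. The Python sketch below illustrates the idea with entirely synthetic counts, chosen only to echo the pattern Buolamwini reported; none of the numbers or group labels are drawn from any real benchmark.

```python
# Illustrative only: how disaggregated evaluation exposes bias that a
# single aggregate accuracy figure hides. All counts are synthetic.

# Hypothetical evaluation results per subgroup: (errors, total tested)
counts = {
    ("lighter", "male"):   (1, 100),
    ("lighter", "female"): (7, 100),
    ("darker", "male"):    (12, 100),
    ("darker", "female"):  (34, 100),  # intersectional worst case
}

# Aggregate error rate looks modest: 54 / 400 = 13.5%
total_errors = sum(e for e, _ in counts.values())
total_tested = sum(n for _, n in counts.values())
print(f"aggregate error rate: {total_errors / total_tested:.1%}")

# Disaggregating reveals a 1% vs 34% gap across subgroups
for (skin_tone, gender), (errors, n) in counts.items():
    print(f"{skin_tone} {gender}: {errors / n:.1%}")
```

Run as written, the aggregate line reports 13.5 per cent, while the per-group breakdown ranges from 1 per cent to 34 per cent, which is why headline accuracy claims for FRT say little about how the technology treats marginalised groups.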


The implications of such errors and the potential harm of misidentification in the context of LAWS are alarming. By using these weapons we risk violating a number of human rights, such as the right to life, the right to due process, the right to privacy, humanitarian protection, and the right to dignity. Legal safeguards must be prioritised and implemented to regulate the rapid development of LAWS.

The Struggle for a Treaty


Human Rights Watch, the Stop Killer Robots campaign, and the International Committee of the Red Cross are front and centre in calling for the prohibition of LAWS, and they are not alone. Experts in the field have advocated for the initiation of a treaty-making process, citing the serious security and humanitarian risks that could result from the use of LAWS as weapons of mass destruction. Support for stronger regulation has also come from within the industry, with Boston Dynamics and five other robotics companies pledging not to weaponise their robots and calling on others to make similar commitments against the development of LAWS. While proposals for an outright ban face resistance from major powers, more than 80 states have now joined the call for a legally binding instrument. Over 30 Latin American and Caribbean nations made history this February by issuing the Belén Communiqué, which acknowledged the urgent need for a legally binding instrument on lethal autonomous weapons. The Communiqué was the first regional statement on LAWS to be issued outside of the stagnating United Nations process.


Ten years into talks, the UN Group of Governmental Experts' most recent meeting on LAWS concluded yet again without meaningful progress. The influence of states deeply invested in autonomous weapons continues to obstruct meaningful outcomes, while the exclusion of civil society from commentary on the report resulted in a document lacking substance. The prejudices and biases in our emerging technologies are not new, but the wartime debut of the Kargu-2 signals a new dawn for racial oppression and violence at the hands of killer robots, not to mention the risk of a destabilising arms race that could trigger the "third revolution in warfare". A treaty must be established to prevent this slide into digital dehumanisation, and time is of the essence.


Angela Suriyasenee is the Cyber & Tech Fellow for Young Australians in International Affairs.


