By Lorraine Finlay and Patrick Hooton

Snapshot

  • The Russia-Ukraine war has become a testing ground for both sides to develop and deploy new drone armaments and technology.
  • Lethal autonomous weapons systems pose a threat to both combatants and civilians. Such weapons are likely to result in war crimes which violate the principles of warfare and human rights.
  • There is a growing chorus of civil society, industry and governments calling for the immediate prohibition of such weaponry.

Russia’s unlawful invasion of Ukraine came as a surprise to many, as Russia has pursued its geopolitical ambitions through violence and bloodshed, in clear violation of international law.

As the Russia-Ukraine war drags into its second year of open conflict, we now know more about both sides’ military capabilities and willingness to engage in a new era of tech-facilitated combat.

While the world has previously observed drones conducting battlefield operations, most notably in conflicts in the Middle East where the U.S. often relied upon drone attacks, we have never before seen drones used so widely as part of nations’ military strategies in open conflict. This conflict has inevitably become a ‘testing ground’ for both sides to develop and deploy new drone armaments to fight wars which more readily utilise technology.

One of the key concerns with developing and deploying such armaments has been the proliferation of autonomous weaponry – often referred to as lethal autonomous weapons systems (‘LAWS’). The Australian Human Rights Commission (‘Commission’) has consistently highlighted the need to ensure that human rights are central to emerging technologies. This has extended to recent calls for a prohibition on LAWS due to the unacceptable risk they pose to human rights.

The use of autonomous weapons poses an existential threat to combatants and civilians alike, and is likely to result in war crimes which violate the principles of warfare and human rights.

What are autonomous weapons?

Lethal autonomous weapons systems are forms of armament which independently select and attack targets without meaningful human control (‘Autonomous Weapons Systems: Technical, Military, Legal and Humanitarian Aspects’, Expert Meeting, International Committee of the Red Cross, Geneva, Switzerland 2014, 7). They often operate via artificial intelligence (‘AI’) and facial recognition technology to go to a location, seek out a target and execute an attack – all without human supervision or intervention (See generally Toby Walsh, ‘Machines Behaving Badly: The Morality of AI’ (Black Inc, Melbourne, 2022) 84-102). This technology is usually integrated with vehicles such as drones or boats.

Although this technology may seem like science fiction, it is very real. General Angus Campbell, Chief of the Defence Force of Australia, recently stated that ‘new military capabilities are proliferating in the Indo-Pacific, including … automatic and autonomous systems’.

Even Elon Musk and a group of more than 100 AI and technology experts have recognised the reality of this technology and its challenges, calling for a ban on autonomous weapons systems and warning of the potential for a ‘third revolution in warfare’ if more is not done.

Autonomous weaponry in combat

Autonomous STM Kargu-2 drones were allegedly first used in March 2020 to ‘hunt’ retreating soldiers during the Libyan civil war (Maria Cramer, ‘A.I. Drone May Have Acted on Its Own in Attacking Fighters, U.N. Says’, The New York Times (3 June 2021)). According to a United Nations report, once combatants were in retreat they were subject to continual harassment and attacks from lethal autonomous weapons systems, resulting in significant casualties (UN Report S/2021/229, 17).

Since then, autonomous weapons have played an increasing role in warfare. For instance, the war in Ukraine has seen increasing use of autonomous systems for ‘drone-hunting’ – a defensive capability in which radar identifies incoming hostile drones and nets are fired to incapacitate them (Frank Bajak & Hanna Arhirova, ‘Drone advances amid war in Ukraine could bring fighting robots to the front lines’, PBS NewsHour).

In October 2022, Ukraine successfully used uncrewed ‘drone boats’ in an attack against the Russian navy at Sevastopol (Adam James Fenton, ‘Ukraine: how uncrewed boats are changing the way wars are fought at sea’, The Conversation (Article, 21 March 2023)).

Even when not being used on the battlefield, LAWS are making military headlines for all the wrong reasons. Earlier this year, a US Air Force official was misquoted as stating that the Air Force had conducted a simulated test in which an AI drone killed its human operator in order to achieve its strategic objective. Although this was never a live test, it is a deeply disturbing thought experiment highlighting that ‘killer robots’ are no longer confined to the realm of fiction.

There are reports that Australia is also making investments in LAWS. The Royal Australian Air Force is anticipating the arrival of several 12-metre-long uncrewed ‘Ghost Bat’ aircraft to assist in protecting Australia’s F-35 fighter jets.

It has also been reported that the Australian Defence Force is testing uncrewed naval surveillance boats and developing a six-metre-long ‘Ghost Shark’ uncrewed submarine.

The publicly released version of the National Defence: Defence Strategic Review 2023 does not make specific reference to LAWS. However, it indicates that it is a priority that the Air Force be able to maintain ‘crewed and autonomous systems capable of air defence’.

Autonomous terror weapons

It is not only state-sanctioned use of lethal autonomous weapons that is concerning. There is a real risk that non-state actors with political motives may be willing, and able, to utilise this technology to further their agenda through violence and the dissemination of fear amongst a populace.

3D printing is now increasingly accessible to everyday members of society, who can use it to quickly and economically print plastic components to build drones.

Generative AI has also been shown to be very effective at producing working code to run programs.

With both technologies available and inexpensive, it would not be difficult for organised terrorist groups, or even ‘lone wolf’ operatives, to wreak havoc on a mass scale – ushering in a new era of tech-facilitated terrorism.

Principles of warfare

The principles of warfare, jus ad bellum (the law governing the resort to war) and jus in bello (the law governing conduct within war), are crucial principles of international law and must not be ignored. These rules guide conduct during warfare and seek to minimise the suffering that is often, and unfortunately, inevitable in any conflict.

In particular, lethal autonomous weapons systems challenge the jus in bello principles of ‘distinction’ and ‘proportionality’ (Toby Walsh, ‘Machines Behaving Badly: The Morality of AI’ (Black Inc, Melbourne, 2022) 98-100).

In war, participants must distinguish combatants from non-combatants, as only military targets may be attacked during operations. While a human combatant may readily distinguish a combatant from a civilian based on clothing, insignia or other subtle cues, these are nuances that lethal autonomous weapons systems could miss.

What is particularly concerning about autonomous weapons is the lack of human oversight. In wartime it is human decision-making and morality which we rely upon to save lives and prevent war crimes. However, the rapid development of this technology has meant that significant problems remain in the underlying systems, problems that may result in untold human tragedy. Autonomous weapons may struggle to accurately distinguish a combatant from a civilian due to inaccuracies in the underlying facial recognition technology. This directly undermines their ability to uphold the principle of ‘distinction’ during warfare and could result in war crimes being committed and civilian lives indiscriminately taken.

In military operations, attacks on military targets must also be proportionate. A strike on a military target which would be expected to cause civilian harm that is excessive in relation to the anticipated military advantage is not proportionate and must be avoided. It is human beings who must make such decisions in order to minimise collateral damage. However, AI is not capable of making decisions in which the loss of civilian life must be weighed against strategic objectives.

When it comes to determining questions of proportionality during war, this is an assessment that should only ever be made by a human. Technology cannot truly understand the value of a life and therefore the consequences of taking one. To allow a machine to determine proportionality in warfare is fundamentally wrong. United Nations Secretary-General António Guterres has described giving machines the discretion to take human life as ‘morally repugnant’ (Toby Walsh, ‘Machines Behaving Badly: The Morality of AI’ (Black Inc, Melbourne, 2022) 85-86).

Prohibition on lethal autonomous weapons

While there are always strategic defensive reasons to ensure robust military capabilities, it is imperative that no country pursues fully autonomous weapons systems due to the unacceptable risks they pose to human rights.

Thankfully, there is a growing chorus of civil society, industry and governments calling for the immediate prohibition of such weaponry.

One of the most prominent voices has been the Stop Killer Robots campaign, coordinated by Human Rights Watch, which seeks to ban lethal autonomous weapons systems. As of 2020, 96 countries had publicly voiced concerns relating to the ethical and human rights implications of the use of lethal autonomous weapons systems. More recently, in 2023, 33 States from Latin America and the Caribbean called for ‘urgent [negotiations] of an international legally binding instrument on autonomy in weapons systems’.

Utilising autonomous weapons

However, not everyone believes these autonomous weapons require strict regulation and prohibition.

The Council on Foreign Relations holds a contrasting opinion to that of many other international organisations. It argues that international law must develop alongside technology, rather than curtailing technological development by imposing a blanket ban on lethal autonomous weapons (Hitoshi Nasu and Christopher Korpela, ‘Stop the “Stop the Killer Robot” Debate: Why We Need Artificial Intelligence in Future Battlefields’, Council on Foreign Relations (Blog Post, 21 June 2022)). It even advocates for lethal autonomous weapons systems, suggesting that, because drones can engage in more targeted combat and thus reduce civilian casualties, lethal autonomous weapons mitigate the risk of human error and reduce the need for high explosives.

However, such claims ignore the difficulties lethal autonomous weapons systems face in adhering to the principles of ‘distinction’ and ‘proportionality’, and therefore in reducing civilian casualties. More concerning is that they overlook fundamental flaws and inaccuracies in the underlying technology. Most notable are the technical limitations of facial recognition technology in identifying women and people from minority racial groups, as compared with other people (see e.g. Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1). Amazon, Microsoft and IBM all announced they would stop, or pause, offering facial recognition technology to law enforcement because it was too unreliable for high-stakes work (Larry Magid, ‘IBM, Microsoft and Amazon Not Letting Police Use Their Facial Recognition Technology’, Forbes (Article, 12 June 2020)).

To suggest that autonomous weapons would mitigate the risk of human error is fallacious and steeped in automation bias. The use of lethal autonomous weapons systems in active combat zones would simply replace the risk of human error with the much more disturbing, and entrenched, errors associated with facial recognition technologies and AI.

Moral imperative

It is a moral imperative that lethal autonomous weapons systems be prohibited in warfare due to the unprecedented risk to human life and dignity.

Fully autonomous weapons cross the threshold of acceptability in their present format and must be immediately regulated. We must all do more to add our voices and expertise to ensure autonomous weapons are tightly regulated and prohibited.

The rapid development and deployment of lethal autonomous weapons systems blurs the boundary between fiction and fact. However, the use of these weapons to attack civilians and combatants in the Russia-Ukraine war is very real.

Killer robots are now operating in active combat and it is a moral imperative that we do everything in our power to ensure they are banned not only in the Russia-Ukraine war, but in all future wars.

Whatever the outcome, the Russia-Ukraine war will have an extraordinary impact not just on geopolitics but on the conduct of warfare. The ramifications of this war will reverberate in wars to come, as aggressors across the globe are witnessing a new way to conduct warfare – and not for the better.



Lorraine Finlay is Human Rights Commissioner and Patrick Hooton is Human Rights Advisor (Business and Technology) at the Australian Human Rights Commission.