Lethal Autonomous Weaponry: Fulfilment of Pop Culture Prophecies?

December 6, 2018

‘I can’t fight this brain conditioning

Our freedom’s just a loan

Run by machines and drones

They’ve got us locked into their sights

Soon they’ll control what’s left inside’

Muse – ‘Revolt’, Drones

From these lyrics, it is clear that Lethal Autonomous Weapons Systems (LAWS) have made their mark on popular culture, with Muse’s album not just presenting a world overrun and controlled by drones, but also telling the story of a human drone who will kill on command.

Charlie Brooker’s controversial Netflix series Black Mirror tells a similar story in ‘Men Against Fire’, in which soldiers are conditioned to see enemies as mutated, zombie-like figures, driving them to kill innocent human beings. In a later episode, ‘Hated in the Nation’, robotic artificial intelligence (AI) bees fitted with facial recognition technology are hacked to carry out mass killings.

Though at first these examples may seem like provocative plotlines created purely for entertainment, they highlight some of the central fears that contemporary society holds about autonomous weapons. Namely, we fear just how autonomous these weapons will be – could they approach human consciousness? We also fear the idea of a weapon that can kill of its own accord, without conscience or thought.
One popular online video, ‘Slaughterbots’, made by Stop Autonomous Weapons with the support of Stuart Russell, Professor of Computer Science at the University of California, Berkeley, was even shown at the UN CCW Group of Governmental Experts meeting in Geneva convened to discuss a ban on lethal autonomous weapons. The video used technologies we already possess to illustrate the alarming capabilities of AI, including its potential use to target political figures and the holders of particular ideologies.

But before we rush to fear these weapons systems, we must define exactly what Lethal Autonomous Weapons are. While the United Nations is still working to develop a defining framework, LAWS are loosely defined as autonomous military robots that can search for and engage targets based on programmed constraints. These robots range from simple drones that search for targets to ‘killer robots’ that search and destroy based on orders or target characteristics.

Popular culture is not the only source of opposition to such weaponry – as of 13 April 2018, 26 nations supported a ban on LAWS, including China, Austria and Colombia. Five nations, however, explicitly rejected the call to negotiate international regulation of LAWS.
Why would these nations support LAWS? Firstly, many see them as a chance to remove human casualties, with the US Department of Defense (DoD) claiming that robots are better suited to ‘dull, dirty or dangerous’ missions. By ‘dirty’, the DoD means missions involving toxic waste or radioactivity, while ‘dangerous’ missions likely refers to travelling through minefields or comparably hazardous terrain.

Some cite cost benefits, with David Francis claiming in a 2013 article for The Fiscal Times that ‘each soldier in Afghanistan costs the Pentagon roughly $850,000 per year’, while ‘the TALON robot costs $230,000’. During operations in Afghanistan and Iraq, the US Defense Department acquired more than 7,000 robots, 2,500 of which were to be kept. However, the Pentagon is expected to put $1 billion into amassing more LAWS over the next few years, and as the technology advances there will likely be further spending on more sophisticated systems in greater quantities. Another consideration is the potential for job losses, as many military roles become redundant once a robot can take on the risk instead. Whether LAWS offer genuine savings is therefore questionable.

Perhaps, then, continued support for LAWS is about ensuring that no country has more than any other. We could see the persistence of France, Israel, Russia, the UK and the USA in supporting LAWS as driven by a fear akin to Mutually Assured Destruction, with each country hoping to ensure its own safety in the event of another country’s technological advancement. This may be understandable from the perspective of individual states, but it is not justifiable from a global perspective. On a practical note, many nations do not have the economic resources to pursue such technology. Possibly more pressingly, however, the international system is built upon the notion of cooperation. From this perspective, the continued pursuit of LAWS technology puts the international community in danger.
Arguably, the disadvantages outweigh these arguments. As popular videos have highlighted, LAWS in the wrong hands could be disastrous. AI technology capable of facial recognition could easily be engineered for political assassination, and its capacity for stealth makes it an enemy that is incredibly difficult to stop. In the hands of terrorists, its destructive power would be horrific.

There is also the classic question asked of AI: how autonomous is autonomous? Will AI reach a point of consciousness and resist instruction, and is equipping it with weapons simply inviting trouble? Even if we do not face a science-fiction-style robot revolution, can we trust autonomous technology? Facial recognition is not faultless, nor is machinery in general – if a LAWS had a minor fault in either its recognition or its weaponry, it could be liable to commit friendly fire or to kill innocent civilians. If this were to happen, who would face the consequences? Do we blame the developers, the army that purchased the system, or the government that funded that army?

It is easy to see how LAWS are dangerous not only to human beings but also to governmental structures, which put themselves and other nations in a threatened position by continuing to allow this technological development. Moreover, during the April 2018 UN Convention on Certain Conventional Weapons meeting, it was noted on multiple occasions that existing international and disarmament law already provides material for a ban on autonomous weapons systems. For now, it seems as though LAWS will continue to be developed until it is too late, as many nations do not yet see their danger. Will the pop-culture prophecies have to come true before the international community comes to its senses?

– Jennifer Roberts

Image by Defence Images on flickr.com