Autonomous weapons: decisions to kill or not?

Whether or not to kill other humans is the central question in debates over the law and ethics of Autonomous Weapon Systems (AWS). Under International Humanitarian Law (IHL), taking human lives and destroying objects during armed conflict is assessed in terms of the lawfulness of an attack. But how can legal advisers evaluate who decided to kill when a lethal action emerges from the interaction between a human and a weapon system, for example a combat drone?

Researchers from the Stockton Center for the Study of International Law at the U.S. Naval War College have published a research project on the future of AWS. It explores how artificial intelligence and machine learning interact with international humanitarian law in autonomous weapon systems. Machine learning, in particular, presents a unique set of issues that challenge traditional concepts of control over weapon systems.

When a soldier decides to use a bayonet against an enemy on the battlefield, the soldier’s decision to kill is carried out directly by the blade. Weapons such as torpedoes, cruise missiles, and air-to-air missiles are likewise simply and logically traceable to a human’s decision to kill. Now imagine an unmanned submarine or combat robot that was programmed to independently identify potential targets and was granted the authority to attack the enemy after spending days, weeks, or months without human interaction. In that case, it becomes questionable whether a human decision leads to the attack.

We can all observe the advances in AI techniques, but when some experts speculate about future AWS equipped with sophisticated AI, they ascribe decisions to machines. Even advanced machines do not decide anything in a human sense. An unmanned submarine that has been granted the authority to engage targets without human intervention has not made a decision; the AWS was programmed by humans to achieve a certain goal.
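
To make that point concrete, here is a minimal, purely illustrative sketch in Python. None of the names, classes, or thresholds come from any real system; they are hypothetical stand-ins. The point is that the machine only evaluates criteria its human programmers fixed in advance, so the “decision” to engage is just the execution of those human choices.

```python
# Purely illustrative sketch: every criterion below is a human-authored rule.
# The machine only evaluates them; it does not "decide" in a human sense.

from dataclasses import dataclass

@dataclass
class Contact:
    classification: str            # label produced by an onboard classifier
    confidence: float              # classifier confidence, 0.0 to 1.0
    inside_engagement_zone: bool   # geofence check, defined by humans

# Human-defined policy: which classes may be engaged and how certain the
# classifier must be. Changing the system's behaviour means changing these values.
AUTHORIZED_CLASSES = {"enemy_warship"}
MIN_CONFIDENCE = 0.95

def may_engage(contact: Contact) -> bool:
    """Return True only if all human-specified criteria are met."""
    return (
        contact.classification in AUTHORIZED_CLASSES
        and contact.confidence >= MIN_CONFIDENCE
        and contact.inside_engagement_zone
    )

if __name__ == "__main__":
    c = Contact("enemy_warship", 0.97, True)
    print(may_engage(c))  # True: the outcome follows entirely from the rules above
```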

The death toll and level of destruction cannot be predicted if the decision to kill is functionally delegated to a machine. But functional delegation does not mean that machines are making decisions. Under International Humanitarian Law, it is humans who decide who dies when they create and field an unpredictable autonomous weapon that can be indiscriminate in multiple ways. Nor does this mean that humans must provide input to a future AWS at a point temporally proximate to the lethal action in order for it to comply with IHL.

Let’s explore another example of why a blanket requirement of human input is unnecessary from an IHL perspective. Today, anti-tank mines may remain in place for an extended period without activating. In the future, with the help of new technological advances and algorithms, this type of weapon could be equipped with the ability to positively identify valid military objectives. Whether that is lawful will depend on the specific authorities and capabilities granted to the AWS; if the lethal consequences of an AWS’s actions are unpredictable, the decision to kill may have been unlawfully delegated to a machine.
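
As a thought experiment only, the sketch below (again in Python, with entirely hypothetical names and thresholds, not drawn from any real weapon) shows the temporal gap this example describes: identification criteria are fixed by humans at deployment time, and activation, possibly much later, happens only when those criteria are met, with no human input at that moment.

```python
import time

# Hypothetical, human-set deployment parameters. Everything the lethal
# behaviour depends on is fixed here, possibly long before any activation.
AUTHORIZED_TARGET = "main_battle_tank"   # the only class this system may engage
MIN_CONFIDENCE = 0.99                    # required classifier certainty
CHECK_INTERVAL_SECONDS = 5               # how often the sensor is polled

def sense_and_classify():
    """Placeholder for an onboard sensor plus classifier.

    A real system would return (label, confidence) from its perception
    pipeline; this stub simply reports that nothing was detected.
    """
    return None, 0.0

def patrol_loop(max_checks: int = 3) -> bool:
    """Wait passively; signal activation only if the human-set criteria are met."""
    for _ in range(max_checks):
        label, confidence = sense_and_classify()
        if label == AUTHORIZED_TARGET and confidence >= MIN_CONFIDENCE:
            return True   # lethal action would follow, with no human in the loop
        time.sleep(CHECK_INTERVAL_SECONDS)
    return False

if __name__ == "__main__":
    print("activated:", patrol_loop())
```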

Future military commanders will need to cope with threats that are too fast and erratic for humans to respond to. Here is an example. The Chinese ministry of defense is working on unmanned autonomous weapon systems that can attack a U.S. aircraft carrier group together, as a swarm. The response must be extremely fast because such a fight takes place at “machine speed.” If a commander relies on human operators, whose response to a command is comparatively slow, the battle will be lost very quickly.

This means that policy and regulation are needed “to evaluate and proactively address risks associated with increasing autonomy in weapon systems, to preserve the law of armed conflicts’ humanitarian protections, and to minimize human suffering and death,” a position presented by the experts Rebecca Crootof and Frauke Renz. Legal concepts such as those described above must therefore be linked more directly to reasonably foreseeable AI and machine learning technology.

Taking all the information and cases mentioned above together, only a multidisciplinary approach can answer these challenges. Technical experts in computer science have to work with IHL lawyers to determine how, from a technological point of view, the decision to kill can be delegated to a machine. Another important question is how mission-type orders can be carried out by AWS that are self-learning and potentially unpredictable. There must be an ongoing exchange of knowledge between lawyers and computer scientists.

A ‘wait and see’ approach is not acceptable. National security interests and humanitarian ideals are too important to leave to future generations.

Author: AI.Business

