Friday, April 16, 2021

AI Use in Military Technology and the Attribution of Fault


Technological innovation has the ability to revolutionize military affairs. While such revolutionary innovations are few and far between, the adoption of advanced artificial intelligence (AI) systems could very well be one of them. Today, limited AI technology is used across all sectors of military affairs, finding particular success in assisting mass surveillance and the analysis of counterinsurgency intelligence. However, as AI finds its way into a greater number of weapons systems, concern about its reliability grows. That concern prompts us to ask: when AI inevitably fails on the battlefield, resulting in an unintended death or injury, whom do we hold responsible, and how?


The answer to this question ultimately depends on the type of system the technology employs. Human-In-The-Loop (HITL) machines operate semi-autonomously, requiring human interaction to act. Human-Out-of-The-Loop (HOTL) machines make decisions without any human interaction, or regardless of it. An example is a self-driving vehicle that shuts down whenever a “low-oil” sensor is triggered: the shutdown follows automatically from the sensor reading, and no human input can suggest otherwise. The benefits of HOTL systems are speed and the removal of human error.
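To make the two poles concrete, here is a minimal Python sketch of the low-oil example. All of the names here (hitl_step, hotl_step, operator_approves) are hypothetical and for illustration only: in the HITL version the machine may not act without a human's approval, while in the HOTL version the sensor reading alone determines the action.

```python
from enum import Enum

class Action(Enum):
    SHUTDOWN = "shutdown"
    CONTINUE = "continue"

def hitl_step(sensor_triggered: bool, operator_approves) -> Action:
    """Human-In-The-Loop: the machine may not act until a human approves."""
    if sensor_triggered and operator_approves("Low oil detected. Shut down?"):
        return Action.SHUTDOWN
    return Action.CONTINUE

def hotl_step(sensor_triggered: bool) -> Action:
    """Human-Out-of-The-Loop: the sensor reading alone determines the
    action; no human input is consulted or accepted."""
    return Action.SHUTDOWN if sensor_triggered else Action.CONTINUE

# HITL: nothing happens unless the operator says yes.
print(hitl_step(True, operator_approves=lambda prompt: False))  # Action.CONTINUE
# HOTL: the triggered sensor forces a shutdown regardless of any human.
print(hotl_step(True))  # Action.SHUTDOWN
```

The legal intuition maps directly onto the code: in the HITL function a human's judgment sits on the critical path, while in the HOTL function the outcome is entirely a product of the machine's design.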


If AI systems were limited to these two types, responsibility for injury caused by AI would be easy to attribute. Injury or death caused by a HITL system would likely put the specific human in the loop at risk of punishment under the Uniform Code of Military Justice (UCMJ), which covers situations ranging from premeditated murder (Article 118) to negligent discharge of a firearm (Article 134). Conversely, in the event that injury or death results from the malfunction of a machine employing a HOTL system, traditional product liability law is the appropriate framework.


However, advocates for AI use in military technology are arguing for the widespread adoption of a third system, referred to as Human-On-The-Loop, that blends the benefits of both HITL and HOTL technology. Ideally, Human-On-The-Loop systems capture the speed of HOTL technology while preventing the errors that full autonomy can produce. In a Human-On-The-Loop model, the machine operates continuously, making decisions without human input, but under the observation of a human who can intervene. Imagine a missile defense system that identifies and eliminates incoming enemy missiles but allows a human to intervene when an incoming friendly aircraft is misregistered as a missile. Or return to the self-driving vehicle that shuts down when the “low-oil” sensor is triggered: in a Human-On-The-Loop system, the operator could prevent the shutdown if he or she deemed the oil level sufficient to continue operation.
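Continuing the sketch above (again with hypothetical names), the Human-On-The-Loop version acts autonomously by default but gives a supervising human a chance to veto before the action completes:

```python
from enum import Enum

class Action(Enum):
    SHUTDOWN = "shutdown"
    CONTINUE = "continue"

def on_the_loop_step(sensor_triggered: bool, operator_veto) -> Action:
    """Human-On-The-Loop: the machine decides autonomously, but a
    supervising human may veto before the action completes."""
    if not sensor_triggered:
        return Action.CONTINUE
    # Default course of action: autonomous shutdown. The (hypothetical)
    # operator_veto callback returns True only if the observer intervenes
    # within whatever window the system allows.
    return Action.CONTINUE if operator_veto() else Action.SHUTDOWN

# The observer judges the oil level sufficient and vetoes the shutdown.
print(on_the_loop_step(sensor_triggered=True, operator_veto=lambda: True))
# Prints Action.CONTINUE; had the observer not intervened, the machine's
# own decision (SHUTDOWN) would have stood.
```

Note the design choice that matters for liability: the machine's decision stands by default, and the human's role is purely to override it, which is exactly what muddies the attribution of fault discussed below.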


The attribution of fault is not as clear cut for Human-On-The-Loop systems. If an autonomous system such as the missile defense system described above fails and unintentionally injures or kills an ally, who is at fault? Do we hold the manufacturer responsible when someone was supposedly observing the system to prevent exactly that error? Or do we hold that individual responsible for failing to prevent the machine's error? These are the questions we must confront as the application of AI in military technology moves toward Human-On-The-Loop systems. To keep military personnel from being held liable for AI failures, there must be training programs that shift the burden of responsibility back to the manufacturer. It would be inadvisable to purchase and field technology using Human-On-The-Loop AI systems without ensuring that training is adequate to protect the military from undue liability. This can be achieved by requiring manufacturers of AI technology to take on the role of Human-On-The-Loop training and qualification.
