Wednesday, March 11, 2020

Outlaw Killer Robots?

In July of 2018, some 2,400 researchers, among them Demis Hassabis of Google DeepMind and Elon Musk of SpaceX, signed a pledge not to pursue the development of killer robots. The pledge states that its signatories will “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” But are weapons powered solely by AI really all that terrible?

The argument has been made that weapons powered by artificial intelligence would be not only less prone to human error but also more effective, and deep learning is a big reason why. Deep learning trains layered neural networks on examples, so the system figures out how to perform a task from data rather than from rules a programmer writes by hand. In essence, it learns to do tasks by itself. A drone powered solely by AI, with no human operator, would never struggle with a blurry screen or any of the other problems a human operator might have. Nobody would have to sit at a missile defense console, and that person's efforts could be spent elsewhere. In this way, AI weaponry would be vastly superior to its human counterpart.
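
To make “learns to do tasks by itself” concrete, here is a toy sketch in Python (using numpy; the XOR task, layer sizes, and training settings are all my own illustrative choices, not anything from the pledge or from a real weapons system). A tiny two-layer network is shown four input/output examples of the XOR rule and adjusts its own weights until its predictions match, even though the rule itself is never written into the code:

    import numpy as np

    # Toy deep-learning sketch: a two-layer network learns XOR from
    # examples alone. Everything here is illustrative, not a real system.
    rng = np.random.default_rng(0)

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
    y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

    W1 = rng.normal(size=(2, 8))  # input -> hidden weights
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))  # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: compute predictions from the current weights.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # Backward pass: nudge every weight to shrink the error.
        grad_p = (p - y) * p * (1 - p)
        grad_h = (grad_p @ W2.T) * h * (1 - h)
        W2 -= h.T @ grad_p
        b2 -= grad_p.sum(axis=0)
        W1 -= X.T @ grad_h
        b1 -= grad_h.sum(axis=0)

    # Typically ends up close to [[0.], [1.], [1.], [0.]].
    print(p.round(2))

Real systems have millions of weights and far richer data, but the principle is the same: the behavior is learned from examples, not programmed line by line.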

The most obvious downside of weapons that can shoot by themselves is that they are attractive targets for hacking by foreign entities. On a single day in July of 2014, the United States was targeted by hackers 5,840 times. If AI weaponry is developed, we can expect some share of these daily attacks to be aimed at those weapons. Of course, we could place a nigh-impenetrable firewall around them, but eventually an attack would get through, and from then on the consequences would be dire. Is the United States, or any other country, really willing to risk that?

Another, perhaps less obvious, critique of AI weaponry is that even after every human involved is dead, the weapon keeps operating. Do we really want certain regimes or organizations to field weapons that continue to fire after all of their forces have been killed? On top of that, such weapons would be extremely difficult to dismantle or destroy. For a regime or organization that does not value human life and simply wants to cause as much chaos and destruction as possible before it falls, these weapons would be perfect.

In short, I believe that weaponry powered by artificial intelligence is a good thing to have, but there always needs to be a human element backing it up. Weaponry powered solely by AI is too dangerous for now. Something like Terminator will most likely never happen, but if checks are not placed on AI weaponry before it goes too far, there may be real problems in the future.
