Lethal autonomous weapon systems: an Armageddon the U.S. ignores


CC/Jim Howard

Unmanned aircraft, like this Predator drone, are increasingly used in modern warfare. Soon, AI-powered drones and weapons, called lethal autonomous weapon systems (LAWS), may begin to be deployed in combat.

Laken Kincaid, Managing Editor

Undoubtedly, the fear of mutually assured destruction looms over the heads of many governments today. With conflicts like the Russia-Ukraine war, one cannot help but wonder what new technology will be used both to kill and to spread fear through society. Since their advent in the 1940s, nuclear weapons have been the answer to that question, but the new technology on the horizon presents a possibly larger threat to humanity as a whole. No, it does not take the form of a stronger atom bomb built from complex chemistry and seeds of hatred, but rather of something reminiscent of what is already at our fingertips and all around us.

Today, artificial intelligence (A.I.) has evolved into a threat that many disregard out of familiarity. Before touchscreen cell phones and laptops became everyday objects, people feared the next steps of technology, which is why the film industry produced classics like “Robocop” and “2001: A Space Odyssey.” Whether people realize it or not, animatronic villains like the ones we saw in theaters are close to becoming a reality, and to being deployed by the military, which should rightfully rekindle those fears. Yes, we are very close to seeing “The Terminator” come to fruition.

This threat takes the form of lethal autonomous weapon systems (LAWS): drones that are able to identify and kill a target without intervention from a human. For many, they go by the grim nickname of “killer robots.” This means that A.I., a system of binary code, can scan a person and kill them without any kind of human approval. Undoubtedly, the idea is difficult to comprehend; it feels apocalyptic.

It may be natural to think robots would be competent, even beneficial, by removing the emotional aspect of human-on-human warfare. Yet that is the entire problem. According to Frank Pasquale, a professor at Brooklyn Law School, the algorithms inside LAWS today are not capable of distinguishing between combatants and noncombatants; they would confuse a person holding a gun with a person holding an ice cream cone. That is why you are forced to identify traffic lights or crosswalks in CAPTCHAs: it is an effective test for catching A.I. because no algorithm can truly identify qualities that require a human’s societal knowledge.

For example, you cannot describe a cat with just numbers; you have to use terms like fur or claws. A computer does not understand those terms because it lacks the human experiences associated with learning them. You would know I am describing a cat because you have seen one before and are familiar with the concepts of fur and claws. But how do you explain a cat to something that has no experience of texture, sound or any form of life? Currently, it is impossible. As Pasquale says, “By ruling out the possibility of combat, the drone destroys the very possibility of any clear differentiation between combatants and noncombatants.”

This makes it all the easier, as Pasquale warns, to reduce these programs to discriminatory weapons that target based on skin color or region rather than on behavior. It would be all too easy to feed in racist descriptors and let civilians be caught in the crossfire (and the system could very well be incapable of anything else).

Pasquale also warns that these A.I. systems could spill over from warfare into domestic use. He states: “Once deployed in distant battles and occupations, military methods tend to find a way back to the home front. They are first deployed against unpopular or relatively powerless minorities and then spread to other groups. U.S. Department of Homeland Security officials have gifted local police departments with tanks and armor. Sheriffs will be even more enthusiastic for A.I.-driven targeting and threat assessment.”

Undoubtedly, this is a huge problem. Toby Walsh, a researcher at the University of New South Wales in Sydney, even says these weapons could be more devastating than nuclear bombs. Not only do they cause unnecessary death abroad, but they will soon bleed over onto U.S. streets.

The natural answer to this problem is to halt the development of these weapons; they seem to have no redeeming benefit. However, according to the Congressional Research Service, no U.S. or overarching international policy prohibits the development or deployment of these weapons. A Turkish-made drone has reportedly already made an autonomous kill. While the concern was brought to the United Nations last December, both the United States and Russia refused to agree to full restrictions on LAWS. Now, it seems unlikely that any nation will make peaceful policy amid the conflict in Ukraine. By the time the war is over, it may be too late, and post-Soviet states could be using these weapons against all of their enemies without hesitation.

The one hope, at this point in time, is to propose an international treaty. The 1995 protocol on blinding laser weapons under the Convention on Certain Conventional Weapons (CCW) is still being upheld by Russia even as it weighs tactical nukes. Jen Ziemke, a John Carroll professor of political science with a focus on foreign affairs, says the closest Russia has come to breaking it is deploying satellites, not to blind anyone, but rather as “just part of space-to-space combat.

“It is one of the many reasons why the layer of satellites is increasingly in the domain of conflict, and why there is a new combatant command called the Space Force,” Ziemke continued.

Yet the time for treaties may be running out. As conflicts grow and leaders like Vladimir Putin become more volatile, agreements like this may become null and void, leaving little room for future settlements. The United States missed the opportunity to regulate this A.I. and to educate the public about it, and it may now face the consequences. Due to inaction, we may watch this 1950s sci-fi premise become a horrid reality.

“[W]e are moving toward an ever greater outsourcing of war to things that cannot protest, cannot vote with their feet (or wings), and for whom there is no ‘home front’ or even a home at all,” Ian Shaw, a professor at the University of Glasgow, said. “The risk here is that democracy abstains from the act of killing: on the loop, but no longer in the loop. An empire of indifference fought by legions of imperial robots.”