Defense Statecraft

Wednesday, March 14, 2018

The Military Robotics Race and Its Driving Forces

Today, "robots" are used across the US military (but not limited to the US) for a plethora of reasons-EOD, transportation, combat, scouting, reconnaissance-the list goes on, and widens if you start to consider air, sea, and land use. The market for this has been growing steadily, and is expected to be worth close to $31 billion USD by the early 2020's. As of 2017, the military robotics market was pegged just under $17 billion USD. If the estimates are true, this is a remarkable leap in only half a decade's time. This growth has two main causes: events of terrorism, and the need between all nations to created unmanned systems.

Developing countries (India and China are the largest) are building large forces of these robotic tools and systems, and the competition is intensifying. Countries big and small are not only adding robotic systems to augment their forces; they are also investing heavily in the underlying technology. This focus on automation, and the manufacture of a broad array of unmanned systems designed for land, sea, air, and space, only adds to the race. Europe and the Asia-Pacific region will lead the charge: European military robotics markets are projected to grow the most and the fastest over the next five years, which can be explained by the ever-increasing R&D activity undertaken across Europe as a whole.

For the Asia-Pacific region, however, increased defense spending and bigger military budgets are the main drivers of growth in this new market. That growth is also strongly intertwined with the geopolitical dynamics of the region: the measures taken by the emerging, booming economies of India and China are aimed primarily at enhancing their military capabilities and gaining a competitive edge over regional rivals. Still, both Europe and the Asia-Pacific region are focusing more and more on developing and deploying unmanned systems for military applications.

The US sits at the top of this market (many of the largest firms are US-based: Lockheed Martin, Northrop Grumman, and Boeing), but it also coordinates and partners with many other nations in the race. For example, India recently agreed with the US to purchase 22 armed drones for its military. Indeed, no small part of the expected growth in those regions is due to US companies' direct involvement with the technology. With ever more emphasis placed on unmanned robotic systems, devices, and equipment, and with nations becoming more and more reliant on them, this projection of growth is most likely accurate and will not slow down. The US and its allies need to stay at the forefront to maintain their competitive edge over other nations in the world; the US must continue this R&D to keep its place as the world leader and global superpower.

Friday, March 09, 2018

Pray for Captain America, Prepare for Red Skull

Popular culture is chock-full of depictions of experiments performed on soldiers to enhance their performance and make them less vulnerable on the battlefield or otherwise. So-called "super soldiers" like Riley Finn in Buffy the Vampire Slayer or Captain America and Red Skull from the Marvel Universe all represent the results of military experiments to create the perfect human or human hybrid. Injecting someone with a serum or drug to make them more powerful, or trying to create a human with the abilities of a supernatural being, might sound like something that exists only in comic books, but it is something the US military has been trying to do for decades in one way or another. Militaries have been attempting to create super soldiers to gain a strategic advantage for centuries, dating back to Inca warriors in pre-modern times. The goal is to create a more effective, powerful, heroic force like Captain America, but in doing so you run the risk of creating something dangerous with serious and long-lasting implications, like Red Skull.

Since the beginning of warfare, soldiers have used drugs to enhance their military capabilities. Civil War soldiers used morphine, and German soldiers in WWII used crystal meth, a habit encouraged to dehumanize soldiers, make killing easier, and even combat stress. The advancement of technology has changed the approach, but the goals remain similar. Since 1990, DARPA, the central research and development organization of the Department of Defense, has turned its focus to creating a new kind of soldier. Exoskeletons are one example of that focus, but there is also a push to go further and lessen the effects of fear and fatigue on soldiers. During his time as director of the Defense Sciences Office (DSO), a department within DARPA, Michael Goldblatt went as far as hiring a biotechnology firm to develop a vaccination that would reduce pain so soldiers could continue to fight regardless of injury. In a program called the Brain-Machine Interface, the DSO explored the possibility of brain implants to enhance cognitive ability and perhaps even enable telekinesis.

Several other experiments intended to enhance soldier performance have been declassified, and there are undoubtedly countless others that remain secret. The effort to keep soldiers safer while making them more effective is understandably compelling, but is stripping them of their humanity the answer, and where do you draw the line? Quite often, knowing when something has gone too far is what separates a hero from a villain.

Thursday, March 08, 2018

A Robot Fly on the Wall

From Rosie the Robot on The Jetsons to Arnold Schwarzenegger's Terminator, humans have been fascinated with autonomous robots for decades. The more lifelike, the more intriguing, and science fiction is all about futuristic robotics. Thanks to developing technologies, the idea of these humanlike robots is becoming much more science and less fiction. Robots can be designed to perform any number of particular tasks, and lately many developers are spending more money on designing robots for military tasks. This has caused concern among skeptics who fear the capabilities of "killer robots," and several have petitioned the United Nations to ban their development. Perhaps it is only a matter of time before robots serve as autonomous soldiers, but in the meantime it is important to highlight other reasons for robotic development in the military arena.

One of the more fascinating robots on the market is the Robobee. No larger than a piece of pocket change, the Robobee is modeled on the biological makeup of bees. Researchers at the Wyss Institute are developing these miniature robots to perform tasks related to disaster relief, agriculture, and reconnaissance, among others. Originally designed to help compensate for bee species going extinct, these micro-machines are now being considered for much larger feats.
The capabilities of these tiny machines are potentially endless. Already designed to fly, hover, perch, and swim underwater, these insect-like robots can go virtually anywhere.

Exclusive rights to the Robobee remain with Harvard's Wyss Institute, but it is likely that these miniature robots will eventually be sold to military organizations and corporations around the globe. Equipping Robobees with cameras and/or listening devices would allow organizations to send in spies to collect intelligence unnoticed. If perfected, this could give intelligence-collecting organizations an upper hand. Drones are already used as robotic intelligence collectors, but they are large and visible from a distance; used effectively, the Robobee would be essentially invisible to the naked eye. A timeline for Robobee use in espionage and reconnaissance missions remains unclear, but it is possible that in the near future there will be robot flies on the wall.


Rise of the Robot Army

Robot soldiers: the appeal is strong among military planners and personnel. They could replace human soldiers in dangerous missions, preventing loss of life. They would make fewer errors, because they do not get hungry, tired, angry, or scared. Robots act as a force multiplier, letting a commander throw bodies at the opponent to force submission. They are faster at making decisions and better at following commands. Plus, they are just plain cool to have.

Robot soldiers would remove the human element of war. World War II casualty rates would never be repeated! What could the downside be??

Well, will these robot soldiers be able to distinguish enemy troops from friendly troops on the battlefield? If they are trained to target threats, what would stop them from targeting one of our own in the heat of battle? Along the same line of thought, would they target civilians? I doubt even the developers of robot soldiers would know the answers to these questions until robots are introduced into actual wars. Learning these lessons with boots on the ground would be terrifying.

Additionally, even if the AI is programmed to correctly distinguish friendly troops, enemy troops, and civilians, would programmers be able to imagine every battlefield scenario a robot might face? Even a successfully programmed robot soldier would only be usable in the scenarios it was programmed for. The closest comparison I can make would be sending WWII-trained American troops into Vietnam and trying to fight the same war there despite the changes in conditions and tactics. And even on the ground, if a robot encounters a situation it was not programmed for, it may handle it poorly or simply shut down, which would be a major concern in wartime.
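To make that brittleness concrete, here is a toy sketch. It is purely illustrative and not modeled on any real targeting system; the categories, rules, and default behavior are invented for this example. A rule-based decision function only covers the scenarios it was explicitly given, and falls back to doing nothing on anything unseen:

```python
# Toy illustration (invented example, not any real system): a rule-based
# "targeting" function that only knows the scenarios it was programmed for.

KNOWN_SCENARIOS = {
    ("uniformed", "armed", "hostile_zone"): "engage",
    ("uniformed", "armed", "friendly_zone"): "hold",
    ("civilian_clothing", "unarmed", "hostile_zone"): "hold",
}

def decide(observation):
    """Return an action for a known observation; fail closed on anything unseen."""
    action = KNOWN_SCENARIOS.get(observation)
    if action is None:
        # Out-of-distribution case: the robot was never programmed for this,
        # so it "shuts down" and does nothing -- exactly the concern above.
        return "no_action (unhandled scenario)"
    return action

print(decide(("uniformed", "armed", "hostile_zone")))        # engage
print(decide(("civilian_clothing", "armed", "urban_crowd")))  # no_action (unhandled scenario)
```

Either default is bad: fail open and the robot may engage the wrong target; fail closed and it freezes in the middle of a fight.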

With cyberattacks from nation-state threat actors on the rise, robots could be hacked during battle. In that way, soldiers could be removed from the fight without the enemy suffering a single casualty. Or, if the robots were not disabled, hackers could simply harvest their data as key intelligence. GPS locations, sensor feeds, and cyber-espionage are all potential vulnerabilities a robot army would face.

Finally, potentially the biggest issue I see with robots: they would cost a lot of money. Would the creation of a robot army just incite a robot arms race until economies collapse? Would wars with less tech-savvy states become likelier? All of these issues seem highly likely, and for that reason the world is nowhere near the point at which a robot army is feasible for war purposes.

Self-Regulating Communities Writing Self-Regulating Algorithms

Autonomous weapons systems (AWS) open the door to "really neat operations" that the U.S. military hasn't necessarily been able to do before, according to Major Jen Snow of SOFWERX. Major Snow, associated with the Donovan Group at SOFWERX, builds relationships between Special Operations Command (SOCOM) and what she calls "self-regulating communities" of "makers and hackers." SOFWERX runs outreach to inventors and innovators to glean advantages in the use of technology on the battlefield, offering prizes and exposure for makers and hackers at its rapid-prototyping expos. Engineers, hackers, inventors, and scientists fill SOFWERX's Florida headquarters, offering new ways for soldiers to eliminate or use unmanned aerial systems (UAS), detect infrared, hack enemy systems, and much more.

The outreach that SOFWERX is engaged in highlights the pressure the U.S. military feels to stay ahead of technological developments on the battlefield, and SOCOM isn't the only organization outsourcing. The Defense Advanced Research Projects Agency (DARPA) also issues grants to industry and academic teams working on advanced technology. In recent years, DARPA has focused more and more on deep learning, a form of Artificial Intelligence in which the system learns its own internal decision rules from data rather than executing hand-written instructions, to augment America's warfighting capabilities. This brand of Artificial Intelligence can be far more capable than comparatively transparent, hand-coded algorithms, but its OODA loop takes place inside a black box.
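A minimal sketch of that contrast (an invented illustration, not any DARPA or SOFWERX system; the feature vector, weights, and thresholds are made up) shows why a learned model's "reasoning" is hard to inspect compared with a hand-written rule:

```python
# Hand-written rule vs. learned model: same decision, very different auditability.
import numpy as np

# Transparent rule: a human can read exactly why it fires.
def rule_based(threat_score):
    return "alert" if threat_score > 0.8 else "ignore"

# Learned model: behavior is encoded in numeric weights, not readable logic.
# (Weights here are random stand-ins for whatever training would produce.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def learned(features):                  # features: length-4 vector of sensor inputs
    hidden = np.tanh(features @ W1)     # the "reasoning" is just arithmetic on weights
    return "alert" if (hidden @ W2).item() > 0 else "ignore"

x = np.array([0.2, 0.9, 0.1, 0.5])
print(rule_based(0.9), learned(x))      # the second answer has no human-readable "why"
```

The rule can be audited line by line; the learned model's decision can only be probed from the outside, which is what the "black box" worry amounts to.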

To clarify, the U.S. military's plan for developing AWS is to rely on self-regulating communities to build self-regulating algorithms that will be tasked with "really neat operations" the military might feel uncomfortable risking human soldiers to conduct. Furthermore, not all of the ideas coming out of those self-regulating communities of makers and hackers will be selected for military use, but they will be ready for quick sale to basically anyone else. So, ultimately, humans won't be able to know why our AWS make the decisions they do, and the military won't fully control who develops AWS or how. That's not just a recipe for a lack of meaningful human control; it's a recipe for a lack of meaningful state control.