by Michelangelo Freyrie
Since 2014 much has changed in international politics. Perspectives on future warfare and technological advancement, however, have not, retaining the direction set by the US Department of Defense since the unveiling of the so-called “Third Offset Strategy”. As the risk of other nations catching up to US capabilities is once again a prime concern of its leaders, many investment directions are being considered to maintain technical superiority in future confrontations: big data analysis, miniaturization and 3D printing are all priorities included in the memo by former Secretary of Defense Chuck Hagel. But the most striking aspect of the initiative is the one involving further research in “human-machine teaming”, meaning deeper integration of computer systems into decision making and response. This application of artificial intelligence naturally attracts concerns, not only about the ethics of non-human components in warfare but also about the best way to pursue the objective. On the other side of the spectrum, sci-fi fantasies provoke unrealistic expectations about what rudimentary AIs can accomplish at this stage of development, despite the shifts we have already witnessed in civilian applications. But before analyzing the recent history and prospects of this technology, we need to make an obligatory stop in 1997, the year computer integration took its first steps.
The term “Offset Strategy” is used officially to characterize the capabilities of the U.S. military in comparison to possible opponents. Historically, the First Offset Strategy refers to the advantages the US had in the 1950s in terms of nuclear weapons, while the Second Offset Strategy focused on “intelligence, surveillance, and reconnaissance (ISR) platforms, improvements in precision-guided weapons, stealth technology, and space-based military communications and navigation.”
Arguably, being able to see the bigger picture is an indispensable requirement for becoming a chess grandmaster, and the greatest of them all cannot lack a certain strategic outlook. When Garry Kasparov was first defeated by Deep Blue on May 11th, 1997, global public opinion rapidly seized on the story, framing it as the first defeat of humanity by the raw processing power of computers. The chess player turned activist was, however, wholly unimpressed by the performance of his adversary: “It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.” Lacking any creativity or capacity to learn from its errors, the machine had overpowered the man through the sheer mass of moves it could evaluate in a few minutes. But what if creativity and computational force could both be used? Kasparov vs Deep Blue relaunched a notion that had been floating around in the literature since the 1970s: merging the human component with computational brute force to spawn a new form of play, fancily named Advanced or Centaur Chess.
While the human player sets a strategy to defeat the adversary, the computer evaluates the proposed plans and shapes the tactical level of the confrontation, taking into account the outcome of every move and weighing the strength of one strategy against another. This collaboration allows the player to implement ambitious strategic plans while benefitting from tactically perfect play. Blunders (bad moves caused by oversights, time trouble and overconfidence) are reduced to virtually zero thanks to the exponential increase in data gathering and analysis by the team. Chess is hailed as the ultimate strategy game because of the absence of random elements: everything that could influence the game is laid out in front of the players, who need only consider the interactions between the set pieces. In the context of warfare, this implies a severe limitation: the fog of war and the randomness of many components of reality greatly limit computers, deterministic machines that hardly know how to react to the unexpected. It is unlikely we will see generals assisted by supercomputers anytime soon, but this does not mean human-machine teaming could not have a significant impact where random parameters are greatly reduced: the tactical level of the battlefield. The streamlining of information processing is nothing new, especially in ballistic systems. The US Terminal High Altitude Area Defense (THAAD) already automates three of the four stages of human information processing: the system recognizes an incoming enemy missile, analyzes its trajectory and selects the best action to counteract it, with the operator left to “just” pull the trigger. There are of course differences between choosing among equally legitimate courses of action and the rather obvious task of downing a missile, which can hardly be rendered ineffective by other means. But more ambitious experiments have been conducted on this simple model of information gathering and response.
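The division of labor described above, where the machine detects, analyzes and recommends while the human retains the final decision, can be sketched as a simple pipeline. Everything here is illustrative: the track fields, values and function names are invented for the example, not drawn from any real THAAD interface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    """A hypothetical radar track of an incoming object."""
    object_id: str
    is_hostile: bool
    intercept_probability: float  # estimated chance an interceptor downs it

def detect(tracks: List[Track]) -> List[Track]:
    """Stage 1 (machine): filter hostile objects out of the radar picture."""
    return [t for t in tracks if t.is_hostile]

def analyze_and_select(threats: List[Track]) -> Optional[Track]:
    """Stages 2-3 (machine): rank threats and recommend the best engagement."""
    return max(threats, key=lambda t: t.intercept_probability, default=None)

def engage(recommendation: Optional[Track], operator_approves) -> str:
    """Stage 4 (human): the operator is left to 'just' pull the trigger."""
    if recommendation is None:
        return "no threat"
    return "fired" if operator_approves(recommendation) else "held"

# A human-in-the-loop run: the machine recommends, the operator decides.
picture = [Track("a1", False, 0.0), Track("b2", True, 0.85), Track("c3", True, 0.60)]
decision = engage(analyze_and_select(detect(picture)), operator_approves=lambda t: True)
```

Replacing `operator_approves` with an unconditional yes would collapse the design into full autonomy, which is exactly the line such systems currently stop short of crossing.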
Manned-unmanned teaming (MUM-T) between AH-64 Apache pilots and drones such as the Gray Eagle and RQ-7B Shadow has been one of the more fascinating experiments of the US Army. The theory behind the concept is simple: the pilots of the Apache, controlling an auxiliary drone from the cockpit of their own helicopter, benefit from the increased situational awareness provided by the drones’ cameras during ground attacks, as well as from increased firepower if the UAS (Unmanned Aircraft System) is armed. But if the recent deployment in Afghanistan with the 1st-101st Aviation Reconnaissance Battalion has shown anything, it is that the adoption of such systems will require a large-scale rethinking of a soldier’s capabilities as well as massive investments in Dual-Mode Cognitive Automation, the distribution of human tasks to Artificial Cognitive Units. Our experience so far derives only from a hierarchic relationship between pilot and machine, as in the automatic stabilization of an aircraft. In the case of our Apache example, however, an optimal system would need to be capable of adaptive automation, the takeover of control by the machine based on the pilot’s state and workload: the drone would be under the pilot’s control when, for example, a new altitude or a zoom on a particular area is required, while it would autonomously perform routine operations such as take-off and following the Apache during dispatches. But why not take this a step further? Bearing in mind that we are entering a realm of possibilities, it would not be such a stretch to imagine a system capable of assessing the risk of a flight maneuver or landing zone given the available intelligence. Crucially, centaur teaming could achieve a level of performance similar to the blunder-free games of its chess counterpart, or at least streamline the acquisition and sharing of information at every level of operation.
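Adaptive automation, as described above, is essentially a control-handoff policy: routine tasks stay with the drone, mission-relevant tasks go to the pilot unless their measured workload is saturated. A minimal sketch, with the task names and the workload threshold invented purely for illustration:

```python
# Routine operations the drone performs autonomously regardless of workload.
ROUTINE_TASKS = {"takeoff", "station_keeping", "follow_apache"}

def control_mode(task: str, pilot_workload: float, threshold: float = 0.7) -> str:
    """Return who controls the drone for a given task.

    pilot_workload is an assumed 0-1 estimate of cognitive load (e.g. from
    physiological or task-density sensors); the 0.7 threshold is a placeholder.
    """
    if task in ROUTINE_TASKS:
        return "autonomous"
    if pilot_workload > threshold:
        return "autonomous"  # machine takes over when the pilot is saturated
    return "pilot"
```

With these assumptions, `control_mode("takeoff", 0.2)` yields `"autonomous"`, `control_mode("sensor_zoom", 0.2)` hands control to the pilot, and `control_mode("sensor_zoom", 0.9)` returns it to the machine once the pilot is overloaded.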
In a way, it would mean starting to consider unmanned systems not only as the human’s eyes and ears but also as their reflexes and muscle memory, multiplying and extending manned capabilities in an age of demographic decline across much of the developed world.
This shift can only come at the price of rethinking current ideas about the military, both in its role as the ultimate wielder of violence and in the men and women composing it. For one, these systems are bound to give soldiers on the ground far more autonomy in the execution of missions, because of their increased awareness compared to the higher command echelons and the far greater destructive power single units can potentially unleash (in this article I have omitted drone swarms, a further development that increased machine autonomy could lead to). This is hardly compatible with a rigid command structure and requires embracing the Prussian Auftragstaktik (a command concept in which even the most junior officers were required to make far-reaching decisions, with complete liberty in how to achieve operational goals) much championed by General Mattis. Moreover, it would be foolish to think centaur teaming represents an exclusively technical challenge. True, the concept requires overcoming several hurdles: electronic defense and encryption, the development of sophisticated Intelligent Agents, the defense of satellite systems and communication nodes, the development of HALE-UAVs (drones capable of sustaining prolonged high-altitude missions without needing to land), and so on. Still, centaur teaming requires the human operator to actively lead the electronic component, not merely supervise it. Mistrust of computers, overreliance and reduced situational awareness beyond the screens are all risks inherent in the use of UAS, and they eerily resemble the very blunders machines are supposed to eliminate. In addition, the comparative advantages of centaur teaming often lie in the quality of the human operator. Computers can be copied; people cannot. Future soldiers, and indeed every worker whose job involves electronic systems, will need to be remarkably good at non-analytical tasks.
Thinking outside the box, generating hypotheses about the correlations worked out by machines, and strategizing will be their jobs, and stimulating these capabilities will require rethinking not just military training but an entire education system. The only advantage an army will ever be able to maintain in the long term is not technological, but human. In advanced chess, it is always the best player who wins.