Hyperwar


In the March 2nd edition of the Wall Street Journal, Julian Barnes and Josh Chin announced the dawn of a new arms race breaking over the increasingly chaotic geopolitical arena: the competitive pursuit of artificial intelligence and related technologies. At the present moment, the United States leads the world in AI research, but with the emergence of a “Darpa with Chinese Characteristics” the mad dash is on. And behind the US and China is Russia, hoping within the next ten years to have “30% of its military robotized” – a path that neatly complements the country’s burgeoning efficiency in non-standard netwar.

At the horizon, Barnes and Chin suggest, is a new speed-driven, technocentric mode of conflict that has been granted the qabbalistically-suggestive name of “hyperwar”:

AI could speed up warfare to a point where unassisted humans can’t keep up—a scenario that retired U.S. Marine Gen. John Allen calls “hyperwar.” In a report released last year, he urged the North Atlantic Treaty Organization to step up its investments in AI, including creating a center to study hyperwar and a European Darpa, particularly to counter the Russian effort.

The report in question unpacks hyperwar further:

Hyper war… will place unique requirements on defence architectures and the high-tech industrial base if the Alliance is to preserve an adequate deterrence and defence posture, let alone maintain a comparative advantage over peer competitors. Artificial Intelligence, deep learning, machine learning, computer vision, neuro-linguistic programming, virtual reality and augmented reality are all part of the future battlespace. They are all underpinned by potential advances in quantum computing that will create a conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds…or even less. This development will perhaps witness the most revolutionary changes in conflict since the advent of atomic weaponry and in military technology since the 1906 launch of HMS Dreadnought. The United States is moving sharply in this direction in order to compete with similar investments being made by Russia and China, which has itself committed to a spending plan on artificial intelligence that far outstrips all the other players in this arena, including the United States. However, with the Canadian and European Allies lagging someway behind, there is now the potential for yet another dangerous technological gap within the Alliance to open up, in turn undermining NATO’s political cohesion and military interoperability.

“[A] conflict environment in which the decision-action loop will compress dramatically from days and hours to minutes and seconds… or even less.” Let those words sink in for a moment, and consider this hastily-assembled principle: attempts to manage the speed-effects of technological development through technological means result in more and greater speed-effects. James Beniger’s The Control Revolution: Technological and Economic Origins of the Information Society is the great compendium of historical case studies of this phenomenon in operation, tracing out a series of snaking, non-linear pathways in which technological innovation delivers a chaos that demands some form of quelling, often in the form of standards, increased visibility of operations, better methods of coordination, etc. These chaos-combating protocols become, in turn, the infrastructure of further expansion, more technological development, greater economic growth – and in this entanglement, things get faster.

Beniger’s argument is that this dynamic laid the groundwork for the information revolution, with information theory, communication theory, cybernetics, and the like all emerging from managerial discourses as ways to navigate the unpredictability of modernity. We need no great summary of the effects of this particular revolution, with its space-time compression, unending cycles of events, the breakdown of discernibility between the true and the false, and the rising tide of raw information that threatens to swamp us and eclipse our cognition.

Where this path of inquiry leads is to the recognition that modernity is being dragged, kicking and screaming, into the maw of the accelerationist trolley problem: catastrophe is barreling forward, and the space available for decision-making is evaporating just as quickly. There simply isn’t enough time.

Even in the basic, preliminary foreshadowings of the problem, command-and-control systems tend to find themselves submerged and incapacitated. Diagramming decision-making and adjusting the role of the human in that diagram is the foremost response (and one completely flush with the assessment drawn from Beniger sketched out briefly above). First-order cybernetics accomplished this by drawing out the position of the human agent within the feedback loops of the system in question and better integrating the decision-making capacity of the agent in line with these processes. From Norbert Wiener’s AA predictor to the SAGE computer system to Operation Igloo White in Vietnam, this not only blurred the human-machine boundary but laid the groundwork for the impending outright removal of the human agent from the loop.

[Figure: the TOTE (test-operate-test-exit) loop]

Consider the TOTE model of human behavior, which imported the fundamental loop of first-order cybernetics directly into the nascent field of cognitive psychology. TOTE: test-operate-test-exit. Goal-seeking behavior in this model follows a basic process of testing the alignment of an operation’s effect with the goal, and adjusting in kind. But consider two systems whose goals are to win out over one another, each following the TOTE model in relation to the other’s actions. The decisions made in one system impact the decisions made in the other, veering the entanglement of the two away from anything resembling homeostasis. Adding in the variables of speed, the impossibility of achieving total information awareness in the environment, and the hard cognitive limits of the human agent gets us to the position where the role of the human in the loop becomes a liability. But it’s not just the human, as the US military learned in Vietnam: the entire infrastructure, even with the aid of the cybernetic toolkit, falls victim to information bottlenecks, decision-making paralysis, and the fog of war. The crushing necessity of better, more efficient tools is revealed in the aftermath – but this, of course, will deepen the problem as it unfolds along the line of time.
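To make the escalation dynamic concrete, here is a minimal sketch – the names, step size, and victory margin are hypothetical illustrations, not anything drawn from the TOTE literature – of two adversarial TOTE systems whose exit conditions are defined against one another:

```python
# Two adversarial TOTE (test-operate-test-exit) systems. Each one's goal
# is to hold a margin over the other, so every "operate" phase raises the
# bar the opponent tests against. Purely illustrative numbers.

def tote_step(own: float, rival: float, margin: float = 1.0) -> tuple[float, bool]:
    """One pass through the loop: test the goal, operate if it fails."""
    if own >= rival + margin:        # test: is the incongruity resolved?
        return own, True             # exit
    return own + margin, False       # operate: escalate, re-test next pass

a, b = 1.0, 1.0
for cycle in range(10):
    a, a_done = tote_step(a, b)
    b, b_done = tote_step(b, a)
    print(f"cycle {cycle}: a={a:.1f}, b={b:.1f}")
    if a_done and b_done:            # never reached: the goals are coupled
        break
```

Because each system’s test is indexed to the other’s current state, the coupled pair has no reachable exit: capability ratchets upward indefinitely, never settling into homeostasis.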

Enter John Boyd’s OODA loop. As with the trajectory of Wiener’s thought, Boyd’s theory was first drawn from the study of aviation combat and radiated outwards from there. OODA stands for observation-orientation-decision-action, and like the TOTE model it framed cognitive behavior in decision-making as a series of loops. Observation entails the absorption of environmental information by the agent or system, which is processed in the orientation phase to provide context and a range of operational possibilities to choose from. Decision is the choice of an operational possibility, which is then executed as an action. This returns the agent or system to the observation phase, and the process repeats.
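Read this way, the loop can be rendered as four phases handing off to one another in strict sequence. Here is a deliberately naive sketch of that sequential reading – the functions and data structures are placeholder assumptions, not Boyd’s own formalism:

```python
# A naive, strictly sequential rendering of OODA. Each phase is a
# placeholder function and control flows in a fixed cycle -- a reading
# that, as argued below, misses what is distinctive about Boyd's model.

def observe(environment: dict) -> dict:
    """Absorb raw environmental information."""
    return dict(environment)

def orient(observation: dict, context: dict) -> list[str]:
    """Process observations against context into candidate operations."""
    return context.get("repertoire", ["hold"])

def decide(options: list[str]) -> str:
    """Select one operational possibility."""
    return options[0]

def act(choice: str, environment: dict) -> None:
    """Execute the choice, altering the environment observed next pass."""
    environment["last_action"] = choice

environment = {"threat": "unknown"}
context = {"repertoire": ["evade", "engage", "hold"]}
for _ in range(3):                   # the cycle repeats indefinitely
    observation = observe(environment)
    choice = decide(orient(observation, context))
    act(choice, environment)
```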

 

[Figure: diagram of Boyd’s OODA “loop”]

This might look at first blush like the linear loop of first order cybernetics and the TOTE model, but as Antoine Bousquet argues this is not so:

A closer look at the diagram of the OODA “loop” reveals that orientation actually exerts “implicit guidance and control” over the observation and action phases as well as shaping the decision phase. Furthermore, “the entire ‘loop’ (not just orientation) is an ongoing many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection” in which all elements of the “loop” are simultaneously active. In this sense, the OODA “loop” is not truly a cycle and is presented sequentially only for convenience of exposition (hence the scare quotes around “loop”).
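Bousquet’s point can be registered in code as well: rather than one phase in a fixed sequence, orientation becomes persistent state that filters what is observed, revises itself against each encounter, and can drive action directly through “implicit guidance and control.” A hedged sketch, with hypothetical weights and thresholds:

```python
# Orientation as persistent state rather than one step in a sequence: it
# shapes what gets observed, updates itself continuously, and can trigger
# action directly, bypassing explicit decision. All numbers here are
# illustrative assumptions, not Boyd's.
from typing import Optional

class Orientation:
    def __init__(self) -> None:
        self.expected_threat = 0.5

    def filter_observation(self, raw: float) -> float:
        """Observation is already shaped by the current orientation."""
        return 0.7 * raw + 0.3 * self.expected_threat

    def update(self, filtered: float) -> None:
        """Orientation revises itself with every encounter with novelty."""
        self.expected_threat = filtered

    def implicit_action(self, filtered: float) -> Optional[str]:
        """Implicit guidance and control: act without explicit decision."""
        return "evade" if filtered > 0.9 else None

orientation = Orientation()
for raw_signal in (0.4, 0.95, 1.0):
    seen = orientation.filter_observation(raw_signal)
    orientation.update(seen)
    action = orientation.implicit_action(seen) or "deliberate-and-decide"
    print(f"raw={raw_signal:.2f} seen={seen:.2f} -> {action}")
```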

Early cybernetic approaches to the battlespace insisted on achieving a full-scale view of all the variables in play – a complete worldview through which the loops would proceed linearly. It was, in other words, a flattened notion of learning. Boyd, by contrast, insists on the impossibility of achieving such a vantage point. Cognitive behavior, both inside and outside the battlespace, is forever being pummeled by an intrinsically incomplete understanding of the world. In first-order cybernetics, the need for total information awareness raised the specter of a Manichean conflict between signal and noise, with noise being the factor that impinges on the smooth transmission of information (and thus breaks down the durability of the feedback loop executing and testing the operation). For Boyd this is reversed: passage through the world partially blind, besieged by noise, makes the ‘loop’ a process of continual adaptation through encounter with novelty – a dynamism that he describes, echoing Schumpeter’s famous description of capitalism’s constant drive to technoeconomic development, as cycles of destruction and creation:

When we begin to turn inward and use the new concept—within its own pattern of ideas and interactions—to produce a finer grain match with observed reality we note that the new concept and its match-up with observed reality begins to self-destruct just as before. Accordingly, the dialectic cycle of destruction and creation begins to repeat itself once again. In other words, as suggested by Godel’s Proof of Incompleteness, we imply that the process of Structure, Unstructure, Restructure, Unstructure, Restructure is repeated endlessly in moving to higher and broader levels of elaboration. In this unfolding drama, the alternating cycle of entropy increase toward more and more dis-order and the entropy decrease toward more and more order appears to be one part of a control mechanism that literally seems to drive and regulate this alternating cycle of destruction and creation toward higher and broader levels of elaboration.

What Boyd is describing, then, isn’t simply learning, but the process of learning to learn. For the individual agent and complex system alike, this is the continual re-assessment of reality following the (vital) trauma of ontological crisis – or, in other words, a continual optimization for intelligence, a competitive pursuit of more effective, more efficient means of expanding itself. It is for this reason that Grant Hammond, a professor at the Air War College, finds in Boyd’s OODA ‘loop’ a model of life itself, “that process of seeking harmony with one’s environment, growing, interacting with others, adapting, isolating oneself when necessary, winning, siring offspring, losing, contributing what one can, learning, and ultimately dying.” Tug on that thread a bit and the operations of a complex, emergent system begin to look rather uncanny – or is it the learning-to-learn carried out by the human agent that begins to look like the uncanny thing?

Back to hyperwar.

For Boyd, the dynamics of a given OODA ‘loop’ are the same as those of the scenario detailed above, in which two competing TOTE systems lock into speed-driven (and speed-driving) escalation. Whichever loop evolves better and faster wins – and in the context of highly non-linear, borderless, technologically-integrated warfare, the unreliability of the human agent remains the central element to be overcome. Hence hyperwar, as General John Allen makes clear in trying to get a grip on the accelerationist trolley problem:

In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.
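The tempo logic can be caricatured in a few lines of code – with the caveat that the cycle latencies and the “hit” rule below are illustrative assumptions, not a model drawn from Allen’s report. Each adversary acts on a picture of the other that is one full cycle old, so the shorter loop systematically operates inside the longer one:

```python
# Two adversaries cycling through OODA with fixed latencies. A side's
# "posture" shifts once per completed cycle (t // latency); an action
# lands only if the opponent's posture has not shifted since it was
# observed, one full cycle earlier. Illustrative rule, not Allen's.

def duel(latency_a: int, latency_b: int, horizon: int = 60) -> tuple[int, int]:
    hits_a = hits_b = 0
    for t in range(1, horizon + 1):
        if t % latency_a == 0:       # A completes a loop and acts
            hits_a += (t - latency_a) // latency_b == t // latency_b
        if t % latency_b == 0:       # B completes a loop and acts
            hits_b += (t - latency_b) // latency_a == t // latency_a
    return hits_a, hits_b

print(duel(latency_a=2, latency_b=6))   # fast loop vs slow loop: (20, 0)
```

With latencies of two and six ticks, the faster side lands twenty of its thirty actions while the slower side lands none – its picture of the opponent is always stale by the time it can act. Compress the faster loop toward the near-instantaneous and the asymmetry becomes absolute.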

In the passage above, Allen suggests that there is still some capacity for human decision-making in the hyperwar version of the ‘loop’ – but as he points out elsewhere, the US’s military competitors (namely: China) are not likely to feel “particularly constrained” about the usage of totally autonomous AI. A China that doesn’t feel constrained will entail, inevitably, a US that re-evaluates its own position, and it is at this point that things get truly weird. If escalating decision-making and behavior through OODA ‘loop’ competition is an evolutionary model of learning-to-learn, then the intelligence optimization that is, by extension, unfolding through hyperwar will be carried out at a continuous, near-instant rate. At that level the whole notion of combat is eclipsed into a singularity completely alien to the human observer, who, even in the pre-hyperwar phase of history, has become lost in the labyrinth. War, like the forces of capital, automates and autonomizes and becomes like a life unto itself.

