Logic and the Politics of Prevention

In general, prevention is bound to a specific time scheme. The present future refers to what is currently expected of the future; the future present designates the future that will actually occur. The two are not congruent: viewed economically first, performative mathematical calculation methods and derivatives (operating on the present future) always realize a future present in which the difference to the expected and, to a certain extent, fixed future, whose potentials may also be exploited, is actualized. This also means that the future is not a separate, contingent possibility that may or may not occur, but is always already present in the here and now. Prevention builds on this time scheme and modifies it: something should be done before a negatively assessed, i.e. undesirable, event occurs, on the assumption that future undesirable states and events can be predicted from the analysis of the indicators of a current state. Since it is further assumed that developments will in any case worsen without preventive intervention, it follows that some form of risk minimisation must be brought into play (ibid.: 85ff.). It is thus taken for granted that the predicted undesirable developments will probably occur without preventive countermeasures and that, consequently, intervention at the earliest possible stage promises the greatest possible risk minimisation. Prevention therefore aims not only to create but, above all, to prevent and avoid. Basically, prevention means working on the virtual: it aims to control becoming in its supposedly chaotic, event-like nature (cybernetics) in order to avoid or anticipate all possible impending dangers. Future events that have not yet happened thus acquire an undeniable presence in the present.

In order to implement preventive measures, selected and negatively evaluated phenomena must first be cut out of reality and isolated, so that positive relations can be established between them and the predicted events, which are often of a probabilistic nature; on this basis, a field of action is finally designed that serves to prevent future states. In the process, almost anything can be imagined and crystallized as a threat. Be it physical or mental illness, drug consumption, social conspicuousness, terrorist attacks or riots: risks lurk everywhere, imbalances and crises threaten, and that is precisely why prevention is needed. Prevention follows the principle of maintaining latency, aiming entirely at the elimination of a negative that seems to threaten from everywhere and nowhere. Preventive measures themselves can contain moments of both repression and production: prevention punishes and rewards, threatens and irritates, discourages and educates, accumulates and eliminates; it installs technical control systems and uses social networks.

Important terms for further clarifying “prevention” are risk and uncertainty. Contingent is that which appears uncertain and at the same time changeable: what will be could always be otherwise. The crux of this insight, which prevention exploits, is precisely that every coming state can also be influenced by preventive action, whereby it is assumed that the contingent always includes the threatening and that preventive action is therefore necessary. The contingent must by no means be absolutized, but must be integrated into a field of possibilities constituted by power relations.

Furthermore, the concept of risk must be explained, which proves to be calculable in the face of incalculable uncertainty. It is precisely risk against which preventive measures can then be taken; these fall back on a specific pattern of rationality, on the way in which certain objects and their relations are ordered, in order to calculate, compare and evaluate them and then to influence them by means of specific risk technologies within the framework of risk management. Two strategies for dealing with risks can be distinguished: first, prohibition or control, by which risk is to be limited through an arsenal of preventive rules (risk avoidance), and second, risk management, through which a field of possibilities is constructed, rationalised and economised. Prevention and prohibition are thus complementary. Both strategies can overlap and supplement each other, so that they can occur in multiple combinations.1

It was the systems theorists Baecker and Luhmann who, in the early 1990s, proposed that banks, with regard to their risk management, should in future work with the distinction risk/danger rather than risk/security. While in the latter distinction risk is evaluated negatively and security positively, so that security is to be achieved by means of specific risk-avoidance strategies, the risk/danger distinction posits risk as the positive term and danger as the negative one (cf. Baecker 2011: 20). The business of hedge funds and banks can thus confidently turn to the production, management, calculation and structuring of risks. The production of risks corresponds to their calculation, just as the social management of contingency itself is considered part of the risk problem. The self-observation of the system from the perspective of risk, which occurs quite concretely as the adaptation of the financial markets to risk, is absolutely necessary in order to carry out and assess the financialisation of the various types of investment. The concept of risk thus becomes independent of that of danger insofar as a risk is not determined by the presence of a fixed danger emanating from an individual or a concrete group, but is the result of a combination of abstract formulas that represent the occurrence of undesirable or desired events as probable.

Let us further illustrate the problem of risk determination with the phenomenon of capitalization. To set capitalization in motion, the uncertainty involved in any investment or speculation, that is, fundamental unpredictability, must be transformed into a risk that is in some way calculable. In practice, this movement appears as an already volatile spread between fundamental uncertainty and calculable risk, a spread with an extended time horizon that should also make it possible to smooth out its volatility. Since the future returns expected by companies are unknown, they must at least be weighted and determined with specific risk factors, which in turn require a risk profile and special risk management. It is essential for today's financial system to transform uncertainty, since the future cannot be predicted, into a complex risk calculation. This means that the prices of volatile promises to pay (such as securities and derivatives) can only be determined if the future profit and income flows expected for these securities are weighted with specific risk factors and subjected to risk management (portfolio theory); the risks are consequently quantified and fixed in figures in order to allow appropriate discounting.
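The weighting of unknown future returns with risk factors can be illustrated by a minimal sketch of risk-adjusted discounting. All figures are hypothetical, and the single "risk premium" stands in, very crudely, for the specific risk factors mentioned above:

```python
# Minimal sketch: discounting expected future cash flows with a
# risk-adjusted rate. All figures are hypothetical illustrations.

def present_value(cash_flows, risk_free_rate, risk_premium):
    """Discount expected cash flows at a rate that adds a risk
    premium (the 'risk factor' weighting) to the risk-free rate."""
    rate = risk_free_rate + risk_premium
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# The same expected income stream is worth less the riskier it is judged:
expected = [100.0, 100.0, 100.0]                  # expected returns, three periods
low_risk = present_value(expected, 0.02, 0.01)    # small risk premium
high_risk = present_value(expected, 0.02, 0.10)   # large risk premium
```

The point of the sketch is only this: the price of the promise to pay is fixed not by the (unknown) future itself, but by the risk figure assigned to it in the present.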

With Foucault, the creation of risk profiles can be interpreted as a process of normalization. By creating profiles, market participants are on the one hand distinguished from one another and thus individualized, and on the other compared with one another and thus homogenized. Today we are dealing with extremely flexible processes of normalization in the financial markets, in which every market participant without exception is considered a risk factor that must be statistically recorded and permanently evaluated. The process of risk allocation and assessment is, however, by no means based on an invariant norm. Rather, normalization here is to be understood as a variable process within a highly interwoven network marked by diverse constellations, flexible rules of modulation, feedback, power and exclusive hierarchies, with new “differential normalities” (Foucault) constantly being created. This applies, for example, to the debt system, whose logic of time shifts from the logic of fixed repayments to the logic of variable payments, from the probable to the possible, to a speculative time in which past, present and future can be subjected to permanent revision (and this applies even to the private debts of households). Diversities are reduced to a few indicators by means of specific quantifications in order to achieve new homogenizations, unities and adaptations. This does not imply the elimination of difference; on the contrary, normalization uses difference as its real substrate in order to continuously generate standardized organizational systems, statistical systems and power technologies that modulate or even absorb differences.

These specific forms of normalization refer to market populations that are, on the one hand, hierarchical (not everyone has access) and, on the other, heterogeneous, and which precisely for this reason must be integrated by means of flexible procedures whose technologies standardize and at the same time use difference (ranking, rating, scoring, etc.). This type of risk distribution, or allocation to certain risk subjects and risk groups, proceeds by strictly quantifying phenomena, which are represented by a-signifying signs: numbers, tables, models, statistics, charts, etc.
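The double movement of the scoring procedures mentioned here, individualizing each subject while making all subjects comparable along one axis, can be sketched in a deliberately crude form. Indicator names and weights are invented for illustration and stand for no actual rating system:

```python
# Minimal sketch of a scoring procedure: heterogeneous indicators are
# reduced to a single number that ranks subjects against one another.
# Indicator names and weights are invented for illustration.

WEIGHTS = {"payment_delays": -0.5, "income_stability": 0.3, "debt_ratio": -0.4}

def score(profile):
    """Collapse a subject's diverse indicators into one comparable figure."""
    return sum(WEIGHTS[k] * profile.get(k, 0.0) for k in WEIGHTS)

subjects = {
    "A": {"payment_delays": 1.0, "income_stability": 0.9, "debt_ratio": 0.2},
    "B": {"payment_delays": 0.0, "income_stability": 0.5, "debt_ratio": 0.8},
}
# Each subject receives its own score (individualization) and is at the
# same time placed on a single comparative scale (homogenization).
ranking = sorted(subjects, key=lambda s: score(subjects[s]), reverse=True)
```

The a-signifying character of the result is palpable: the score means nothing, it only orders.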

Derivatives capitalize a volatility that they themselves create. All processes of pricing derivatives require the dimensioning and construction of concrete and abstract risks, the former being made comparable through the latter. Financial machines allow the construction, distribution and spreading of various concrete risks among market participants (who belong to heterogeneous market populations and stand in competitive relationships with one another) and the bundling of concrete risks, which then, as a single risk, receive a single price and a single cash flow, that is, an abstract risk that is traded as a derivative and exchanged for money. The abstract risk subsumes the concrete forms of risk and provides the connectivity and liquidity that are essential to the derivatives markets. Time and volatility are constitutive for the form of the abstract risk insofar as the derivative is written at a certain point in time and has a maturity during which its price can change. Capitalization requires a certain mode of identifying, calculating and ordering economic entities and socio-economic events, which must first be distinguished and then objectified as risk events. Since every future stream of returns is unknown, no financial capitalization can take place without calculating how the respective concrete risk is to be assessed with regard to a future generation of returns.
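The bundling described above, several concrete risks receiving a single price and a single cash flow, can be sketched under strongly simplifying assumptions (hypothetical payoffs and probabilities, one flat discount rate, no correlation between positions):

```python
# Minimal sketch: pooling concrete risk positions into one abstract
# risk with a single price and a single cash flow. Figures are
# hypothetical; correlations between positions are ignored.

def pool_price(positions, discount_rate=0.05):
    """Each position is (expected_payoff, probability_of_payment).
    The pool's single price is the discounted sum of its
    probability-weighted payoffs."""
    expected_cash_flow = sum(payoff * prob for payoff, prob in positions)
    return expected_cash_flow / (1 + discount_rate), expected_cash_flow

# Three heterogeneous concrete risks become one tradable abstract risk:
concrete_risks = [(100.0, 0.95), (100.0, 0.80), (100.0, 0.60)]
price, cash_flow = pool_price(concrete_risks)
```

Only through the abstraction of the pool do the heterogeneous concrete risks become commensurable, and it is the pool, not its components, that is traded.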

Ultimately, regardless of the financial markets but following the logic of capitalization, just about anything that deviates from predetermined target values within a cybernetic feedback system, or, more precisely, whatever can be identified as the sign of such deviations, can become a risk signal. These can be social norms, subjects or actions, whereupon normal distributions are constructed by means of statistics and statistical mean values are collected, which estimate the trajectories to which preventive measures then refer. Forecasts thus construct the future today precisely as the management of subjects, who are in turn divided into risk categories in order to assign risk profiles to them. At this point, the financial system and the preventive policing of social situations overlap, the latter marking the subject rather than the population. It is precisely by identifying subjects as potential risks that new risk subjects are constantly produced in the present, along which the boundaries of future risk existence can be drawn between winners on the one hand and losers on the other. Individuals and households that depend on credit to secure their subsistence, and are thus exposed to financial risks, are hit with full force by the fluctuations and cycles of the global financial markets. At present, thanks to the possibilities offered by genetic engineering, individualised prevention policies reach deep into medical prevention, nanotechnology and the positive trait planning or optimisation of individual subjects. This form of neoliberal eugenics works less with disciplinary techniques than with neoliberal selection procedures, new technologies and the logic of self-enhancement in order to implement its program of gentle human breeding.
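The elementary operation of such a cybernetic feedback system, deviation from a statistical mean becoming a risk signal, can be sketched in a few lines. Data and threshold are hypothetical:

```python
# Minimal sketch: deviations from a statistical mean become "risk
# signals" once they exceed a threshold. Data and threshold are
# hypothetical illustrations.
from statistics import mean, stdev

def risk_signals(values, threshold=2.0):
    """Flag observations whose z-score deviates from the sample mean
    by more than the threshold - a target value in miniature."""
    m, s = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - m) / s > threshold]

observations = [10, 11, 9, 10, 10, 11, 30, 10]  # one outlier at index 6
flags = risk_signals(observations)
```

Note that the procedure knows nothing about what the values mean; any series of indicators whatsoever can be fed into it, which is exactly the point made above.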

If we now look at the political field and the state, we are dealing with a kind of permanent policing and securitization of insecurity, whose procedures presuppose that new threats, dangers and risk factors are constantly being identified, which in turn make preventive state action necessary and legitimate. Such preventive policy is then also implemented and, especially in the case of state action, can extend to the liquidation of supposed class enemies or “people’s pests”. This hyperrationalism of anticipating reason is at the same time a totalitarian pragmatism: it pretends to eradicate a risk as one would destroy vermin or pull out weeds. Today, to be suspicious, there need be no concrete symptoms of abnormality; it is enough to have a characteristic that is considered a risk factor by the experts and technocrats responsible for the definitions and modes of preventive policy. The aim is not only to anticipate individual undesirable actions, but also to construct and analyse the objective conditions under which dangers arise, in order then to design new intervention strategies. The result is a veritable laboratory of risk factors, which creates a potentially infinite multiplication of the possibilities for intervention, insofar as prevention raises suspicion to the scientific rank of a calculation of probabilities. In any case, this type of prevention requires extensive governmental data collection and processing (statistics and probability calculation) in order to constitute and at the same time control the population, and to translate uncertainties of any kind into calculable, probable risks on which specific apparatuses of the security state can be built. Preventive policies thus require a new type of monitoring, namely systematic investigation in advance, with the intention of anticipating and preventing the occurrence of any undesirable events such as deviant behaviour or resistance. This surveillance can take place without any contact with the source of danger to be analysed and without any direct representation of it. The state codes thus constantly pump contingency into their own systems, but only in order to contain it. The regime of prevention continues to revolve around the problem of the state of emergency, which is to be actively created and, of course, also established in the long term. Finally, the logic of state prevention can lead to anticipated cleansing and its enforcement.2

Correspondingly, law-preserving law is constantly being rewritten, at least partially suspended, in order to avert allegedly dangerous political events that could threaten or even destroy the existing legal order. In the name of an unquestioned exceptionalism allegedly without alternative, the security state claims the rule over life and death and radicalizes the mechanisms of disciplinary adjustment and post-disciplinary control. The discourses of imagined catastrophes (the threat of terrorism and today in particular the “refugee crisis”), which the state constantly pumps into the media space, now seamlessly serve its own proto-fascist security policies, whereupon they continue to proliferate in the right-wing populist movements, which in turn fuel the state discourses and drive state security policies before them. In the process, this kind of prevention policy becomes the motor of a repressive normalism that tries not only to exclude but to pathologize any kind of deviation and anomaly. Normalistic control mechanisms, which operate by means of data collection, statistics and stochastics to prepare for, predict and control future unstable situations, begin to overlay the laws and existing normative regulations. Today, this requires special software applications, algorithms, big data and other technologies that accelerate the granularization of normality fields and normative values which, as we have seen, can change at any time. Whoever wants to prevent must never stop monitoring, forecasting and controlling. Prevention has thus long been about more than merely avoiding risk; it is also about detailed, active risk management, oriented both to catastrophe and to probability theory. Further transformations are taking place, however, because, analogous to the insurance or securitisation of debt in the financial system, the statistical calculation of the probable is being converted into the algorithmic recording of the possible, i.e. of possible futures.
Risk management, biometric engineering and digitisation are techniques that activate a new mode of governance and a new modality of power, geared towards possible, projected futures and allowing an increasing use of pre-emptive measures, in particular to prevent possible future damage.

As new political risks and social vulnerabilities can constantly emerge, the current security state inevitably takes on a paranoid form, with a permanent insecurity of the sense of security that must be politically reassured. The threatening future has to be integrated into all possible political decision-making processes, and thus one quickly moves from the criminal case to the criminogenic situation, from the pathological case to the pathogenic situation, and such situations must be constantly controlled and monitored. Poulantzas sums it up: “In a certain sense, every citizen becomes a priori suspect, because he is potentially criminal” (Poulantzas 1978: 172). In the course of this kind of preventive criminalization, citizens are themselves called upon to denounce one another, which they are happy to do wherever their authoritarian formation succeeds. Such mechanisms of preventive criminalization are based in principle on the construction of an abstract and at the same time virtual enemy, which has been somewhat concretized in the figure of the universal terrorist, omnipresent and at the same time non-individual in social space. Imagine a space that is now a smooth space, so that the terrorist can appear in any place whatsoever, becoming, so to speak, the total enemy, yet one that is not over-coded by an exclusive enmity but is instead considered a univocal figure that can be molecularized, divided and scattered into countless multiplicities of possible, equally hostile figures. This type of preventive governance, which ultimately aims entirely at controlling populations, is exercised through mechanisms of population division. The object of class war now concerns the reproduction of class, gender, “race” and subjectivity. The war against populations is generated by a virtual-real continuum between economic-financial operations and a new type of political-military operation that is no longer limited to the peripheries alone. To sum up: the war against populations does not address only terrorists, but in its plural form has long been an instrument of control, normalization and discipline of a fragmented, globalized workforce.

The technologically configured state counterinsurgency aimed at the population is synonymous with a very specific war that cannot be separated from the endless pacification of conflicts. Military operations controlled and monitored by algorithms no longer have a target directly determined by human decisions; rather, the military “target”, as the result of a computer-controlled operation, is composed of a sum of metadata from whose pattern an enemy is calculated. Algorithms create certain connections and patterns in the haze of data clouds, and anyone who carries and reproduces these patterns is considered dangerous and can be liquidated, for example by drones. Search engines such as Google or the Chinese Baidu operate with algorithms that can predict, three hours in advance, where a crowd (“critical mass”) will form as a result of search entries. Program code now has the preemptive function of detecting all possible conflicts and disturbances.

Poulantzas also addresses prevention in his analyses, in particular the preventive surveillance of the population through police-administrative procedures. Here, and this is decisive for him, a transition takes place from the punishment of actions that are punishable because they violate existing laws to the construction of a suspicious case, “which is covered by a flexible, elastic and particularistic administrative regulation” (Poulantzas 1978: 202).

It is therefore not surprising that persons classified as potential threats (so-called Gefährder), as provided for in the Bavarian Police Tasks Act (PAG), can in future be subjected to residence orders or bans, monitored with an electronic ankle tag or placed in preventive custody for extended periods. The police can also temporarily deprive them of their assets and, of course, their communications may be monitored without interruption. At the same time, enemies of the state, most of whom belong to the left-wing scene, are sorted out.

Consider some current police strategies. With the use of specific software it is now possible to make permanent risk prognoses, and prevention measures in the context of police work are being perfected ever further through digitalisation. This, however, is also the policy of the security authorities, who actively promote these measures. It is now less a matter of the police investigating crimes that have already been committed than of preventing future crimes, for which they are given the appropriate technological apparatus on the one hand and ever more rights of intervention on the other. Since the 1980s, police powers, which are laid down in the Code of Criminal Procedure
