Beyond “Control Societies”

Radical indifference

Today, the behavioral and receptive patterns of the masses consist mostly of repetitive habits, although as users on the Internet they by no means remain passive; rather, they are constantly involved in the pseudo-turbulent events of the social networks, busily producing data and information, in conformity with their employment and sometimes even racist in content, which the large media groups such as Google, Apple or Facebook extract, quantify and exploit. In the process, a new accumulation logic has emerged within which surveillance capital extracts data on human behaviour, including apparently useless surpluses of data (errors, syntax, signs, etc.); that is, it absorbs data and errors and constantly combines them with other data to feed intelligent machines that use algorithmic processes to produce predictions about the future behaviour of users, predictions that are then offered and sold as quasi-derivatives on behavioural futures markets. (Zuboff 2018: 125) In doing so, surveillance capital extracts a predictive value not only from the content that a user posts, but also from the exclamation marks and commas that he or she sets or does not set; or, to put it another way, not from where someone goes, but from how someone goes. Users may remain the owners of the data they give to the surveillance capitalists, but they get no access to the surplus of data. With the data, the specific content of the posts evaporates; indeed, the statements mutate into exchangeable and at the same time usable material. At the same time, the consumer must be encouraged to somehow stay on the ball within the 24/7 cycle by producing, posting and commenting on material, liking, exchanging and archiving masses of data himself, so that the surveillance capitalists can easily suck it in and use him permanently as a living data-generation machine. Facebook displays a radical indifference to the respective meanings and contents of the posts, with the result that the “content is judged, quantified and evaluated solely according to the volume, diversity and depth of the surplus that is generated” (ibid.: 578), and this is done on the basis of the anonymous measures of “clicks, likes and lengths of stay – despite the obvious fact that it (surplus) derives its profoundly different meanings from profoundly different human situations” (ibid.: 438).
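To make this extraction logic tangible: the sketch below (Python; all field names and features invented, not any platform's actual pipeline) shows how para-textual signals – punctuation, timing, editing behaviour – rather than the content itself can be rendered into a feature vector for a behavioural prediction model.

```python
from datetime import datetime

def extract_surplus(post: dict) -> dict:
    """Hypothetical sketch: derive behavioural features from a post while
    remaining radically indifferent to what the text actually says."""
    words = post["text"].split()
    return {
        # para-textual signals: how something is written, not what is said
        "exclamations": post["text"].count("!"),
        "commas": post["text"].count(","),
        "word_count": len(words),
        "caps_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
        # behavioural metadata: when and how the user acts
        "hour_posted": datetime.fromisoformat(post["timestamp"]).hour,
        "seconds_editing": post["edit_duration_s"],
    }

post = {"text": "Best day EVER!!!", "timestamp": "2020-05-01T23:14:00", "edit_duration_s": 42}
print(extract_surplus(post))  # the feature vector, not the message, is the commodity
```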

The radical indifference of surveillance capital to the contents of the posts requires, from the subjective point of view of the users, an increase in the organic composition of ignorance (apart from love for oneself), which, however, is by no means an individual shortcoming but rather a simple consequence of algorithmic governance; and this ignorance is in turn the prerequisite for surveillance capital to act without further ado as a specific speculative capital, which in the mode of simultaneity transforms uncertainties into risks and then constantly updates and evaluates them. The mode of actualization here, however, must by no means be understood as that of the virtual, as Deleuze elaborates it; rather, it is identical with a momentariness that is now set for the long term, so that everything depends solely on what is current in the present and what counts in the media.1

This has also shifted the relationship between difference and consolidation/standardization in the media world. In our analyses, we perhaps still cling too strongly to equalization, an inversion and mutation of diversity, which, however, does not include the elimination of difference or socio-cultural differentiation; on the contrary, equalization uses difference as its real substrate to generate standardized organizational systems. New systems of order and power technologies are constantly being created that absorb or at least modulate differences. The operationalization of radical indifference, on the other hand, no longer leads to a modulation and consolidation of differences; rather, all differences affecting the most diverse contents are considered equivalent, and content or meaning is thereby neutralized. This kind of rigid formatting goes back to a financialised accumulation regime whose first claim is no longer progress or development, but which in its short-termism must promise, at every second, the successful management of ever-present risk possibilities, regardless of the underlying assets to which the insurances refer.2 In this process, anything and everything, every event, every content and every meaning can be transformed into risk. Analogously, the content available on the social platforms in images, videos and texts materializes as the tradable raw material that surveillance capital extracts as data. This is a surplus insofar as surveillance capital also extracts data that goes far beyond what would actually be necessary as an information service for the user, but which constitutes a resource to be capitalized. Ultimately, all differences (of content) are considered equivalent; or, to put it another way, every specific meaning dissolves in the data stream, so that a radical indifference prevails. Difference equals indifference, this is the magic formula. This kind of content-neutralizing equivalence makes at least the visible, flowing text on Facebook generally susceptible to all kinds of fake news, which the surveillance capitalists usually accept as long as there are no objections from the political side, since any content whatsoever counts as data raw material for the invisible shadow text or the black box of the algorithms, which the “machine intelligence” then operationalizes so that surveillance capital can offer advertisers precise prediction products on the behavioural derivative markets. This type of extraction and production of data or behavioural surpluses requires a project that must also produce certainties far beyond the transformation of uncertainty into calculable risk, i.e. it must not only anticipate precise forms of user behaviour, but also actually motivate and steer users towards the behaviours desired by surveillance capital.

The postfactual age is immanent to the incessant generation and interpretation of data and information (and their meanings), but this does not mean that meaning itself disappears. The liquefaction of meaning in the endless mass of data, as a result of the permanent search for patterns and correlations in the produced data, does not mean, as Baudrillard and his simulation theory assume, that signs merely circulate in the as-if; rather, the extraction of “meaning per se” is becoming ever more intense, precisely because meaning and interpretation remain necessary, regardless of what is meant in detail. This is grounded, in the last instance, in a capitalization oriented towards the future, which includes both money and bits in their exchangeability as well as an increase in capital calculated for the future, with the sole purpose of staging anything and everything as a financial investment or derivative which, regardless of the respective determinacy of the underlying to which the derivative refers, is to generate nothing but yield.

With regard to the interchangeability of bits, the computer proves to be a sign transformer that processes pure information, not without content, however, but with arbitrary and interchangeable content. Just as money must be exchangeable for goods, regardless of which, bits must mean something, regardless of what they mean. Money and bits indicate communication exclusively under the aspect of the negation of a specific meaning.3 Or, to put it another way, under the exclusion of any meaning except that something must be meant incessantly, so that the sheer fact that it is meant, and the thwarted perspective of meaning, clearly come to the fore here.4 Therefore, especially in the current data invasion, there is no general loss of meaning, but rather an overproduction of meaning that is complementary to the indifference of capital towards any specific meaning; still, something must be meant, otherwise the system would fall apart.5 This kind of overproduction of meaning constitutes the real loss of meaning and truth. Betancourt summarizes these connections as agnotology. He writes: “The problem of the information-rich society is not access to information – accessing information becomes an everyday matter due to the constantly activated computer networks – but is a question of coherence. Agnotology works in the production of decoherence: it undermines the ability to determine what information is truthful and permissible for the construction of interpretations.” (211) “Agnotology has the function of eliminating the potential for contradiction.” (234) As a correct term for the constitution of this data empire, Betancourt proposes “agnotological capitalism”: “a capitalism systematically based on the production and maintenance of ignorance.” (233) Ignorance is thus less the consequence of a lack of information than, on the contrary, the result of an abundance of information.

Thus, today, the creative capacity of high-tech paranoia owes less to a lack of orientational knowledge than to the overproduction of meaning arising from the game that meaning exists at all. If meanings thus become interchangeable in multiply circulating artificial interpretative procedures, from which the battles over interpretation first arise, then a frantic search for meaning follows almost inevitably. While capital can certainly live with this kind of loss of meaning through the overproduction of equivalent meanings, this is not readily possible for the state, because the equivalence of all meanings puts the state itself into question insofar as it wants to prove itself the standpoint of all standpoints.

One could say with Lacan that the unfiltered data stream is the realm of the real, while information and metadata reflect reality, a world made intelligent by cognitive filters and technological infrastructures, itself composed of the registers of the imaginary and the symbolic. An incessant flow of data, information and opinions, reacting to one another in an almost hysterical way, especially in the social networks, as equal and as different at every moment, serves above all to create delusional aggregates and illusional waste of every kind.

We must speak of a trend here: Google/Alphabet generates a large part of its turnover with advertising based on the auctioning and renting of search terms, so that data can indeed be called the raw material of the digital economy; yet linguistic meanings do not disappear completely with it, so that there can also be a systematic capitalisation and monetization of linguistic meanings. Even in the financial sector, there is a consensus that capital is not exhausted in numbers, charts and models based primarily on a-signifying semiotics, but that companies need to be dressed up semantically and narratively.

Although the value of brand companies cannot be determined independently of their material assets, business practices, product portfolios and measurable as well as expected future economic success, symbol systems and narratives still play a role in determining their market value. While money, as the form of the general equivalent of goods, levels out their qualitative differences, brands always promise the incomparable and singular; they use not only markers of difference but differentiated systems of signs and symbols. Here the incommensurable meaning, which has no equivalent, itself has meaning. The brands have even institutionalized the appropriation of symbolic capital, whereby it is up to the individual to decide whether he or she wants to fall back on the symbolic worlds and the repertoire of signs of the brands. Moreover, the algorithmic operationalization of lexical material makes it possible to objectify, quantify and monetize the value of a word. The keyword exchanges transfer the mechanisms of price formation to language and its lexical material. Every word, every term has its price, which is determined in a digital auction. And this monetary value in turn does not remain without influence on the choice of words and the use of language, whereby high-ranking terms and combinations of terms are preferred. Money has taken possession of language, while the algorithmic production of language simultaneously generates a high degree of communicative liquidity and indifference.
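How a word acquires a price can be illustrated with a simplified sealed-bid second-price auction, one common mechanism in keyword markets (the bidder names and figures below are invented):

```python
def second_price_auction(bids: dict) -> tuple:
    """Simplified keyword auction: the highest bidder wins the term,
    but pays only the runner-up's bid per click."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# the 'price' of the word "mortgage" emerges from competing advertisers
winner, price = second_price_auction({"bank_a": 4.10, "bank_b": 3.75, "broker_c": 2.20})
print(winner, price)  # bank_a wins and pays 3.75 per click
```

The second-price rule matters for the argument: the monetary value of a term is set not by any intrinsic meaning but purely by the relation between competing bids.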

The quantification dispositif

Traditional tools of discipline and evaluation are not necessarily those of quantification. The new techniques of quantification cover much more than one can even imagine. They are not prostheses of human consciousness; rather, they embody a transformation that goes far beyond human perception. While a camera still offers an amplification of existing human observation, a fitness tracker, for example, allows us to engage with the world in a way that calls previous modes of subjectivation into question. The tracker is not a tool of human perception but an instrument of sensation, whose function is to observe processes in almost incomprehensible detail by generating quantitative data. Fitness trackers do not replace human cognition; they perform sensory registrations of movements (steps, sleep, pulse, etc.) that had not previously been observed and quantified. They open up the possibility of experiencing something that is not directly accessible to human consciousness. As tools of sensation rather than of perception and evaluation, digital tracking technologies perform relatively mundane functions such as counting as much as possible (quantification), without necessarily comparing (measurement). Portable self-tracking technologies tend to observe aspects of the human being that are countable (steps, inhalations, heart rate, etc.), even in the absence of measurement. Where these movements are quantified, measurement can of course be reintroduced as the target or norm of a number of steps. But the primary task of trackers is to count, not to compare. If one considers the question of the pulse both somatically and metaphorically, one comes to the conclusion that a possible crisis of measurement need not be a crisis of quantification. What sensory tools and data analytics require is what Lefebvre calls the “internal measure” of data that is constantly accumulated in real time. This measure necessarily exists in time, referring to a rhythm that originates from the body or a social context, such as the pulse rate. A healthy heart rate is one that is in sync with a situation, but also varies in time in a reasonable and insightful manner.
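The distinction drawn here between counting (quantification) and comparing (measurement) can be stated in a few lines of code; the tracker below is a schematic toy, not any vendor's device:

```python
class StepTracker:
    """Toy tracker whose primary task is to count, not to compare."""
    def __init__(self):
        self.bursts = []

    def record(self, steps: int) -> None:
        self.bursts.append(steps)          # pure quantification: accumulate counts

    def daily_count(self) -> int:
        return sum(self.bursts)            # still no norm, no comparison involved

    def against_norm(self, target: int = 10_000) -> float:
        # measurement enters only when a target/norm is reintroduced
        return self.daily_count() / target

tracker = StepTracker()
for burst in (90, 120, 0, 45):
    tracker.record(burst)
print(tracker.daily_count())    # counting: 255 steps so far
print(tracker.against_norm())   # comparing: 0.0255 of an external norm
```

The point of the separation is that `record` and `daily_count` work without any norm at all; comparison is an optional, later operation layered on top of the count.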

Today, the form of government appropriate to financial capital necessarily includes the quantification dispositif. Steffen Mau writes in his book Das Metrische Wir: “The demand for transparency, in its exaggerated and ultimately totalitarian form, means that each and every one of us is legitimately and permanently checked, observed, classified and evaluated in all aspects of life.” (Mau 2017: 231) The new totalitarian mass movement would accordingly be, to put it somewhat pointedly, the permanently self-counting and self-evaluating, comprehensively counted and evaluated population. In the context of the neoliberal governmentality that Foucault described so vividly, far-reaching dispositifs of quantification as well as of evaluation, calculation and comparison have developed out of the digitalization of capitalist production processes, the state apparatuses and all social institutions. These dispositifs rely on constantly modulating processes of increase and outbidding in terms of the profitability, efficiency, performance and transparency of activities in all possible social spheres, leading to a comprehensive quantification of the economic, social and political bodies that stirs up, penetrates and superimposes itself on the previous stratifications, class divisions and their symbolic procedures of distinction.

Even the most private spheres of the population are now integrated into these quantification processes, whereby the actors are constructed and recorded as data, which in turn are transformed into numbers. These monitoring and quantification processes today encounter strangely passive-active individuals: passive in that they are constantly divided up by ranking and rating procedures and thus generated as individuals who serve as material for tests and samples; active in that they are motivated by desire, tirelessly simulate individuality and, in this context, demonstrate their willingness to voluntarily provide data, to participate in rating and ranking procedures and to cast themselves anew every day. Such participation is not only inherent in the quantification system; it also constantly intensifies its effects, which lie not only in the quantifications produced by the methods and in numerical comparison, but also in the intensified competition between those who evaluate and those who are evaluated.

In the course of the implementation of the quantification dispositif, there are thus constant battles between the actors over their positions; there are ongoing actions that serve to improve rankings and that are inscribed, through continuous measurement, in objectified orders, hierarchies and complexes of valuation. (Mau 2017: 49ff.) This objectified quantification dispositif includes manifold procedures of counting, evaluation and integration, whereby the status of objects, phenomena, facts and actors is translated into the abstract language of numbers. In this mathematical universe, exact, i.e. unambiguous, relations of quantification and order prevail, within which comparative methods, differentiations and scalings play an important role. And it is precisely for this reason that the quantification dispositif is surrounded in public by an aura of neutrality, correct verifiability and exactness, thus offering a useful and supposedly ideology-free representation of socio-economic conditions which, when processed by machine intelligence, seems to be decoupled from human decisions and practices, and even from power relations. One thinks here of apparently objective indicators such as gross domestic product, unemployment rates, indicators of public debt and many other parameters that can be traced back to statistical procedures, but which, once they enter the public eye, can nevertheless lead to considerable distortions in the various social fields.

By means of the continuous inscription of various indicators, data and figures, the quantification dispositif unfolds; its metrics, evaluations and calculations penetrate deep into the political, economic and social fields, structuring and regulating them, down to the various procedures of self-optimization of the actors. The result is a normalizing and at the same time performative ritualization of all these fields, i.e. a socially constructed measurement using the syntax of numbers.

The proliferation of ubiquitous quantification can itself be seen in numbers. It is estimated that the amount of data in digital space will have grown by a factor of 300 in the period from 2005 to 2020 (Mau 2017: 41), whereby not only the production of data and its storage capacities increase, but also the potential for linking and clustering the data, with constant improvement of the algorithmic procedures and the strategies and processes of big data (data mining and analysis). It is easy to demonstrate that the quantification dispositif is part of an expansive capitalisation, even in areas that have so far been largely removed from the logic of profit, such as education, administration, health care and cultural institutions such as museums, all of which are now subjected to an efficient allocation of resources by permanently installing evaluations, tests and audit procedures for the actors integrated into these institutions. These actors are, on the one hand, rendered hyper-visible in the rankings and, on the other hand, forced into a competition that demands constant adaptation, whereby the actors themselves are reduced to living data, sitting in front of flickering screens and typing their behavior into the computers, where it can be comprehensively quantified, apparently without their suspecting it, even though they expose themselves to these processes quite consciously when it comes to advantages in ranking and rating over the mass of competitors. The programs of credit points, the various schemes of evaluation and optimization, extend their gentle regime down to the finest detail, proliferating into the last cracks of offices and corporate situations. Power becomes fluid, it becomes “gaseous”, as Deleuze says; it organizes itself in networks, in which each node potentially contains the information of the entire system; it becomes micrological or even nanotechnical; it becomes “interactive” by initiating an uninterrupted game of action and reaction, in which the actors end up filling the gaps and pores of external control by means of techniques of self-control. In the process, the state and above all the private surveying companies permanently generate incentive systems transformed into numbers in order to further strengthen the actors’ willingness to perform and to give evidence, whereby very smooth algorithmic shadow texts, invisible to the users, accompany and even direct the adaptation processes.

Algorithmic governance

Long-familiar facts such as satellite surveillance, enormous computing capacities on silicon chips, sensors, networks and predictive analytics are now components of the global digital systems that comprehensively quantify, analyse, evaluate and capitalise on the lives and behaviour of populations. Under pressure from the financial markets, Google, for example, sees itself forced to constantly increase the effectiveness of its data tracking and of the analyses generated by machine intelligence, and precisely for this reason to fight every claim by users to the protection of their own privacy with the most diverse means. Thanks to a range of devices such as laptops and smartphones, cameras and sensors, computers are now omnipresent in capitalized everyday life; they are sign-reading machines that execute algorithms (necessarily computable, formally unambiguous procedural instructions) and only develop their full power in the context of the digital media of networking, for which the program-controlled design, transformation and reproduction of all media formats is a prerequisite. In this game, the social networks in particular enable a kind of economy that, through the extraction of personal data used to construct metadata, cookies, tags and other tracking technologies, has established a strangely new algorithmic governance. This development has become known above all as “Big Data”, a system based on networking, databases and high computing power and capacity.

According to Bernard Stiegler, the processes involved are those of “grammatization”. In the digital stage, these lead to individuals being guided through a world in which their behavior is grammatized whenever they interact with computer systems operating in real time. For Stiegler, however, grammatization begins with cave paintings and leads via the media of cuneiform writing, photography, film and television to the computer, the Internet and the smartphone. The result of all this is that the data paths and traces generated by today’s computerization technologies constitute tertiary, attention-reducing retentions or mnemonic techniques that include specific temporal procedures and individuation processes, that is, “industrialization processes of memory” or a “political and industrial economy based on the industrial exploitation of times of consciousness”. According to Stiegler, the digitalization of data paths and processes, which today operate with insistent urgency by means of sensors, interfaces and other devices and are basically generated as binary numbers and calculable data, will create an automated social body in which even life itself is transformed into an agent of the hyper-industrial economy of capital. Deleuze already foresaw this development in his famous essay on control societies, but the forms of control will only come into full force when digital calculation integrates the modulations of control techniques identified by Deleuze into an algorithmic governance that also includes the automation of all existences, ways of life and cognitions.

One of the central dilemmas of postfordism is to establish conformity, consensus and cooperation without using disciplinary techniques of power that damage creativity and affective values too much. The boundaries of the economic seem to be blurring, but economic rationality must still prevail if companies are to survive in competition. Postfordist control societies are no longer explicitly dominated by strategies; rather, the ideal of control consists in letting power infiltrate tactics so as to penetrate the contingent and emergent rhythms of everyday life, in the sense of making them cooperate with the goals of management.

The power technologies of the protocols underlying the Internet are a-normative, since they are rarely debated widely in the public sphere; rather, they appear immanent to algorithmic governance. The question now is: do data generate digital protocols, or do digital protocols generate data? Or, more narrowly: are data digital protocols? In any case, the setting of the data itself has a structuring character, and not only the results of machine data processing. Like any governance, if we think of it in Foucault’s sense, algorithmic governance implements specific technologies of power, but today it is no longer based on statistics that refer to the average or the norm; instead we are dealing with an automated, atomic, pulsating and probability-based machine intelligence, which operates forensics and data mining independently of the medium – automatic computing collects, captures and mobilises data on the behaviour of market participants at breakneck speed, close to that of light, using the methods of artificial machine intelligence, controlled and capitalised by surveillance corporations that extract the data.6 The digital machines that continuously collect data and read and evaluate data traces thus mobilize an a-normative and a-political rationality that insists on the automatic analysis and monetary valorization of enormous amounts of data by modeling, anticipating and thus influencing the behavior of the agents. Today this is trivialised as ubiquitous computing, in which – and this must be pointed out again and again – surveillance capital extracts the behaviour of the users, not only on the Internet but in the real world, and produces the prediction products based on it, created by the algorithms of surveillance capital, and then constantly diversifies and deepens these prediction products through special procedures. Everything, whether animate or inanimate, can now be rendered as data, connected, counted and calculated.

And so signals constantly flow as data from cars, refrigerators, houses and bodies etc. into the digital networks, where they are sold to those advertising customers who carry out targeted advertising. (Zuboff 2018: 225)

To put it more precisely: the capital of companies such as Google or Facebook automates the buying behaviour of consumers, channels it by means of the famous feedback loops of AI machines and binds it in a targeted manner to companies, which in turn are advertising customers of surveillance capital. The behavioural modifications to be achieved among users for advertising purposes are based on machine processes and techniques such as tuning (adaptation to a system), herding (steering the masses) and conditioning (training stimulus-response patterns), which guide user behaviour in such a way that the machine-constructed prediction products actually drive behaviour towards the intentions guaranteed by Google or Facebook. (ibid.) Maximum predictability of user behavior is now a genuine source of profit: the consumer using a fitness app is best advised to buy a healthy beverage at the moment of maximum receptivity, for example after jogging, when targeted advertising makes it palatable to him. The sports goods manufacturer Nike, for example, has bought the data analysis company Zodiac and uses it in its New York stores. When a customer enters a store with the Nike app on their smartphone, they are recognized and categorized by the geofencing software. The app’s start page changes immediately, and instead of online offers, the new products in the shop appear on the screen, i.e. special offers and recommendations tailored to the customer. Particularly loyal customers receive small gifts in the shop and can even have the desired goods delivered to the changing room via smartphone.
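What the text calls geofencing reduces, at its core, to a distance check followed by a context switch in the app. The sketch below is a schematic reconstruction under invented names and coordinates, not Nike's or Zodiac's actual software:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

STORE = (40.7128, -74.0060)  # hypothetical store location
FENCE_RADIUS_M = 50          # the invisible 'fence' around it

def app_start_page(user_pos: tuple, profile: dict) -> str:
    """Switch the start page when the recognized customer crosses the fence."""
    if haversine_m(*user_pos, *STORE) <= FENCE_RADIUS_M:
        return f"in-store offers for segment '{profile['segment']}'"
    return "online catalogue"

print(app_start_page((40.7129, -74.0061), {"segment": "loyal_runner"}))
```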

Surveillance capital has long since ceased to be only about the sale of advertising. It very quickly became the model for capital accumulation in Silicon Valley and is still adopted by almost every start-up company today. But the model is not limited to individual companies or the Internet sector; it has spread to a wide range of products and services across the entire economy, including insurance, health care, finance, the cultural industries, transportation, etc. Almost every product or service that begins with the word “smart” or “personalized”, every Internet-connected device, every “digital assistant” in the corporate supply chain is an interface for the invisible flow of behavioral data on its way to predicting the future in a surveillance economy.

In its role as investor, Google quickly gave up its declared antipathy to advertising, deciding instead to increase profits by using its exclusive access to user data, combined with its analytical capabilities and computing power, to generate predictions of users’ click-through rates, which are seen as a signal of an advertisement’s relevance. In operational terms, this meant that Google transformed its growing database to make it “work” as a behavioral data surplus, while at the same time developing new methods to aggressively seek out sources of surplus production. This surplus data thus became the basis for the new predictions called “targeted advertising”.

The surveillance capitalists first staged themselves with declarations in which the private “experiences” of users were regarded as something that could simply be taken, and then consistently translated these “experiences” into data, appropriated them as private property and exploited them for private gains in knowledge. Google simply declared that the Internet is merely a resource for its search engine, adding that the private experiences of users serve the company’s profits. The next step was to move the surplus operations beyond the online environment into the real world, where data on personal behaviour is likewise considered free to be simply taken by Google. And surveillance capital no longer requires the population in its function as consumers; instead, supply and demand orient the surveillance companies towards businesses geared to anticipating the behaviour of populations, groups and individuals.

Shoshana Zuboff describes the extraction of data from all kinds of human activities as a behavioral surplus, as an attack on experience itself. But what experience? Successful advertising today constantly absorbs qualities of real experience, be it concerts, multi-media events, games or VR applications. From 3D to VR, the aim is to eliminate any barrier between person and experience. These experiences, no longer disturbed by any media, are then translated into what Bernard Stiegler calls conditioning. Aesthetics is now both theatre and weapon. And all this results in a misery in which conditioning substitutes for experience.

It is no longer enough to automate the streams of information that illuminate the population; the goal now is to automate the future behavior of the population itself. These processes are constantly being redesigned in order to eliminate any possibility of user self-determination, which is not insignificant for the capital markets, since predictions about the behavior of populations then become not merely probable but approach the certainty that the desired behavior will occur. In competing for the most efficient prediction products, surveillance capitalists have quickly learned that the more behavioural surplus is appropriated, the more accurate the predictions, and the more the surplus can be varied, the higher its predictive value. This new drive of the economy leads from the desktops via the smartphones into the real world – you drive, walk, shop, look for a parking space, your blood circulates and you show your face – everything is to be recorded, localized and marketed.

Zuboff writes that the surveillance capitalists claim for themselves alone the right to know, to decide who knows, and to decide who decides. They dominate the automation of knowledge and its specific division of labour. Zuboff further writes that one cannot understand surveillance capital without the digital, although the digital can exist without surveillance capital: surveillance capital is not a pure technology, and digital technologies can take many forms. Surveillance capital relies on algorithms and sensors, artificial machines and platforms, but it is not the same as these components.

The limit for cheap data lies in the superimposition placed over other spheres of life in order to draw off their forces. In this respect, resources can also be extracted from people and natures that have already been cheapened by capital. At these borders, industrial labour, such as that performed in Amazon’s fulfilment centres, is tracked and observed, thus serving the company twice over: on the one hand it benefits from the work, while on the other it accumulates data on the movement of bodies in space. Friends and families offer each other the necessary but unpaid support (cheap care) on digital platforms such as Facebook, maintaining social cohesion and reproducing the workforce while producing masses of usable data for the platform owners. This magic trick of collecting data as a by-product of various types of cheap labor is a great coup for capital and another way of extracting every human residue. As Moore says, the new cheap (here: labor) allows new strategies for surviving crises, because the superimposition of cheap data helps to solve the crisis of stagnant productivity and growth by enlisting all kinds of existing work as a service to data-producing machines.

The increase of cheap data does not only concern data extracted from human processes. While Google and Facebook are working to manipulate clicks and behaviors, data is being collected on everything from the movement of machines to the growth of plants to the movement of interest rates. This data is used in various ways to train machine-learning systems that manipulate populations or create new markets – data that shape the world beyond life. When Zuboff absolutizes human behavior as the realm of extraction and control, she limits her arguments to a critique of surveillance, leaving capital and labor largely unexamined.

The behavioral surplus model, and the metaphors entwined around it that describe data as fluid, cascading and overflowing, ignore the ways in which the production of cheap data often enough requires both concentrated and tedious labor. When we voluntarily upload thousands of images of our faces, which are then used by the platform owners, these images often enough need additional tagging or categorization to be useful for commercial purposes, because the pictures do not describe themselves. This is where cheap labor comes into the picture.

The digital piecework of workers such as those employed via Amazon’s Mechanical Turk is essential to produce the cheap data collections that many AI systems and research projects require. ImageNet, the main image database used in the development of visual object recognition software, is based on the work of MTurk workers who sort and tag millions of images day after day so that they can be merged into a data set used in military research and by companies such as IBM, Alibaba and SenseTime, the latter providing the technology used by the Chinese government to control, for example, the Uighur minority. Recent research has highlighted the stress and horror of workers employed in digital factories as they sort out images of ISIS crimes or scan the large social platforms for days on end for hate speech and violent videos. Like all cheap things, cheap data relies on massive externalities to reduce risk or shift it onto other people and natures, while profits flow in the opposite direction.

Damage to workers is just one of the externalities that come with the hunt for cheap data. The cheap energy required to train AI models and to transfer massive amounts of data to and from the cloud is less visible than the exploited workers, but the cumulative effects are enormous. Research estimates that training a single AI model can emit five times as much CO2 as an average car over its entire lifetime. The hardware needed to run all these models and collect the data requires large amounts of valuable metals and new plastics. Cheap nature, extracted by cheap labor, is needed to make the fiber optic cables and computers that collect and connect data.

What happens when cheap data becomes more expensive over time – if, for example, the wages of precarious workers in data processing rise, or stricter privacy controls make it more difficult to produce the behavioral surplus? It can then be assumed that the extraction of data will require new, cheaper areas and frontiers. This process has already begun with the offshoring of digital assembly-line work, as the big tech companies spread out into the global South to create new markets and extract data from new segments of the population.

A company like Google needs to attain certain dimensions of scale and diversification as soon as it starts collecting data that reflects user behavior and also serves to track behavioral surpluses (Google’s data offshoots). The prediction products home in on the user like “heat-seeking missiles” (Zuboff), in order to suggest to him, for example, exactly the right fitness product at a pulse rate of 78 via inserted advertising. Thus, with increasing diversification, a broad diversification of monitorable topics in the virtual world must be achieved on the one hand, and on the other hand the extraction operations must be shifted from the net into the real world. In addition, the algorithmic operations must gain in depth, i.e. they must target the intimacy of the users in order to intervene in their behaviour in a controlling and formative way, for example by displaying pay buttons on the smartphone at the right time and in the right place, or by automatically blocking a car if the person concerned has not paid certain insurance instalments on time.

In the case of a search query, factors such as search terms, length of stay, the formulation of the query, letters and punctuation are among the clues used to spy on the behaviour of users, and even these so-called data emissions can be valorised if this surplus of user behaviour can be used for targeted advertising. Google also allocates the best advertising spaces to the highest-paying advertisers, as determined by algorithmic probability: the prices are calculated from the price per click multiplied by the probability that the advertising will actually be clicked on. Through these procedures, one ultimately also finds out what a particular individual thinks at a particular place and at a particular time. Singularity indeed. Every click on an advertising banner placed by Google is a signal of its relevance and is therefore a measure of successful targeting (Zuboff). Google is currently registering an increase in paid clicks and a fall in average costs per click, which is tantamount to an increase in productivity, since the volume of output has risen while costs have fallen.
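The pricing rule named above – price per click multiplied by the probability of a click – is simply an expected value per ad impression. A minimal sketch with invented numbers shows why better prediction can beat a higher bid:

```python
def expected_value(bid_per_click: float, p_click: float) -> float:
    """Expected revenue per impression: what the slot is worth to the platform."""
    return bid_per_click * p_click

ads = {
    "ad_a": (2.00, 0.010),  # higher bid, poorly targeted (low predicted CTR)
    "ad_b": (0.80, 0.030),  # lower bid, better predicted to be clicked
}
ranked = sorted(ads, key=lambda name: expected_value(*ads[name]), reverse=True)
print(ranked)  # ['ad_b', 'ad_a']: 0.80 * 0.030 = 0.024 beats 2.00 * 0.010 = 0.020
```

This is why prediction quality itself becomes the commodity: improving p_click raises revenue without any advertiser bidding a cent more.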

The pool of data from which the analysts of surveillance capital can now draw is infinite. Surveillance capitalists know exactly who complains to companies, calls hotlines or talks about a company in online portals, and how often. They know the favourite shops, restaurants and pubs of many consumers, the number of their “friends” on Facebook, the originators of the ads that social media users have clicked on. They know the colour of a person’s skin, their sex, their financial situation, their physical illnesses and mental health problems. They know a person’s age, job, number of children, neighborhood and size of apartment – after all, it is interesting for a company that makes mattresses to know whether a customer is single or whether it should offer five foam mattresses for the whole family straight away.

Today, the therapeutic group is materialized in Facebook, in its invisible algorithms, and has at the same time reached a largely imaginary group addiction of unimaginable proportions. And this is where simulation theory goes wrong, for there is nothing unreal about the digital networks; they are very real and create a restless stability for those connected to them, simply by expanding things: more requests, more friends and so on. Herbert Marcuse once wrote that one of the boldest plans of National Socialism was the fight against the tabooing of the private sphere. And it is precisely privacy today that is so free of any curiosity or secret that one writes everything on one’s timeline without any reservations, indeed eagerly, so that everyone can read it. We are so happy when a friend comments on something. And one is constantly busy managing all the data feeds and updates; at least one has to carve some time for this out of one’s daily routines. Your tastes, preferences and opinions are given a price, and you are happy to pay it.

What is new is not just the technological power structures (protocols and algorithms; the network is the message), but an accumulation model that uses digital means to collect all kinds of data reflecting user behavior and to extract from it the behavioral surplus, so that future behavior can be predicted as accurately as possible and targeted advertising delivered, with improvements in prediction increasing the click-through rates on the advertising banners as if on demand. The behavioural surpluses also result from rendering more behavioural data than necessary, which is fed into artificial machines (ranking, statistical modelling, prediction, speech recognition and visual transformation) that then produce predictions of user behaviour, quasi-derivatives that surveillance capital sells on the behavioural futures markets to the highest-bidding companies. Google conducts auctions on these markets in which an enormous number of “derivatives of behavioral surplus” (Zuboff 2018: 105) are auctioned off. It can be concluded that the algorithmic operations with data generate profitable products, all aimed at predicting and monetizing the future behavior of users, for very specific companies: Google’s key customers are those who need advertising and pay Google to provide them with an effective range of prediction products, which in turn are based on comprehensive user monitoring. These products rest on Google’s ability to accurately predict what users will do, think and feel now and in the near future (Zuboff 2018: 119), which also aims to limit the risk for advertisers so that relatively safe bets can be placed on future behavior. However, the addressees in the markets in which Google sells prediction products are not exclusively advertisers, but ultimately all those who have an interest in purchasing “probabilistic information” (ibid.: 120), i.e. also states, for example, and especially their intelligence services, which therefore maintain a close relationship with the companies of Silicon Valley.

The prediction machines are a kind of black box whose internal processes lie far beyond the human capacity of perception. At this point, Zuboff speaks of a shadow text in which the machines provide the relevant instructions for action, which are mostly aimed at influencing the consumption of users. For example, the algorithmized selection of images that Instagram displays to a user draws on streams of that user’s behavioral data, the data of his friends, the data of people who follow the same accounts as the user, and data and links from his activities on Facebook. (ibid.: 555) There is a multitude of data and operations that even the programmers of the machines can no longer see through. By querying the Likes, Facebook can record a comprehensive spectrum of a user’s personal behavior, including sex, political views, consumer behavior, intelligence performance, etc. Seen in this light, the Like reward system delivers the necessary dopamine injections with the right timing to constantly spur the activities of users towards producing further streams of data. A post that receives no Likes means social death for the user.

The new automatic systems model the social in real time, contextualizing and personalizing social interactions, whether in healthcare, business or administration. We should add, with the French authors Rouvroy and Berns, that the new algorithmic governance also includes technologies with territorial and spatial dimensions, which are applied, for example, in the programs of “smart and sensored cities”. With Street View, Google turns public space into an impersonal space of spectacle, transforming it into a living tourist brochure with the sole purpose of monitoring and extracting data from users (ibid.: 169), so that in the end one can even speak of an expropriation of paths and space (which comes along in the loutishly ingratiating jargon of the “smart”), precisely because Google succeeds, beyond the exploitation of online data sources, in monitoring the real world ever more comprehensively, as people are permanently tracked along certain paths and simultaneously steered to certain destinations via behavior modification machines. These smart instruments are based on “automatic computing” and “ambient computing”, i.e. on technologies whose invisibility makes individuals all the more active and efficient, because these technologies weave themselves unnoticed into the fabric of life, stimulating and intensifying behaviour until they are indistinguishable from it. Algorithmic governance focuses entirely on relations, on relations of relations, which in turn are reduced to correlations, because the models of artificial neural networks detect correlations and patterns in particular, but never causes or the explanation of causalities; they serve to classify, bundle and optimize behavior, but are far from understanding it.
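That these models detect correlations but never causes can be illustrated in a few lines (the data are invented; statistics.correlation requires Python 3.10+):

```python
from statistics import correlation

# invented data: weekly hours spent in an app vs. items purchased
app_hours = [2, 5, 1, 8, 6, 3, 7]
purchases = [1, 3, 0, 5, 4, 2, 4]

r = correlation(app_hours, purchases)
print(round(r, 2))  # ~0.99: a strong pattern the system can act on commercially,
# yet nothing here says whether use drives purchases, purchases drive use, or a
# third factor (income, boredom) drives both: classification without understanding
```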

It is therefore a matter of active intervention in and the shaping of future user behaviour. The more data traces users leave behind – or, to put it another way, the more data can be extracted through the methods of diversification and, moreover, through a depth that reaches far into the user’s interior – the more precisely the self-learning algorithmic machines (voice search, neural networks, feedback, etc.) can process the data, in order not only to make the right purchase decisions for the users in advance, but also to gently massage their retentions and protentions and ultimately to condition them harshly. This means that the optimized autonomy that surveillance capital promises the customer is a pseudo-autonomy, a meagre promise, because ultimately the profitable prediction products are produced and distributed automatically and without contradiction.

In their essay on algorithmic governance, the sociologists Rouvroy and Berns show that the size, speed and performativity of algorithms – elements related to relational data processing – far exceed the capacities of human decision-making. They describe this governance as “a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and pre-emptively affect possible behaviours”. Google can easily predict that someone will probably want to buy a suit in the near future, and if that someone is near a shop window behind which suits are displayed, a carefully timed advertisement will pop up on the mobile phone recommending the purchase of a suit on display. Life itself is immersively integrated into the production of data, into a pulsating glow of information; but beyond that, we ourselves have become data, related to machines that capture and process our lives as if we ourselves were parts modulated and held ready for machine intelligence. There is no need to emphasize all of this anymore, because it is part of the habitus through which the subjects themselves are machinized when they stream, update, capture, share, link, verify, map, save, troll and trash, even if this produces the utmost boredom.

The investments affecting the individual now extend to genetic manipulation. Neurotransmitters, organs, biological components and somatic identities are open to exchange. Enthusiasm for freedom degenerates when the body is constantly filleted in terms of its monetary value. The body itself mutates into a profitable business: implants, transplants, surgical operations, sales of organs, blood and germ cells. This is precisely what leads to asomatognosia, an ignorance of one’s own body – the term refers to the loss of perception of, or of the feeling of belonging to, one’s own body parts. “Self-tracking” – the extraction, collection, counting and evaluation of data on all conceivable features and functions of one’s own body by means of various apps and procedures – describes a new form of optimizing one’s own self. Such apps and technologies can automatically record, catalogue and then graphically display all kinds of data. Every missed jogging session, every excess calorie, every dreamy minute at work is immediately registered and flagged, lest one fall under suspicion of not getting the maximum out of the body.

Virgin Pulse is a product for the smart workplace that promises to provide a technology for filling the modern worker with energy. It is an app that allows workers to monitor their own behavior with respect to functions such as sleep, activity, happiness, stress and relaxation, in order to change their behavior towards healthier and happier lifestyles. It is a portable technology which in some companies is equipped with personalized suggestions for improvement for each employee, using “gamification” techniques with specific objectives that also stimulate competition with other employees. In fact, the programs create a feel-good data dashboard that company managers can view. But Virgin Pulse also exhibits something that has received little attention so far, namely an empathy for the pulse, in the sense of the constant 24/7 stream of data that the program generates and analyzes: the pulse of an organization, which allows the observation of vital signs – movements, rhythms, patterns, highs and lows. These are meant to be emergent and self-regulating, so that measurement and discipline take a back seat to quantification. By tracking, quantifying and representing behaviour, this portable technology changes repetitive and ordinary daily routines. The significance of the pulse and its observation can be classified as pre-cognitive rather than cognitive and normative, leading to an unconscious adaptation to the environment.

At the same time, however, three forms of comparison occur: first, the comparison with oneself, made visible as progress or regression relative to previous activities; second, the comparison with (concrete) others through the competition of data; and third, the comparison with standardized average values such as the Body Mass Index. The technologies of self-tracking now make it possible for the first time to capitalize on the entire body and its organs. The entire lifestyle – eating, sleeping, movement, relationships and emotions – can now be researched and converted into numbers, and all this in real time. Thus not only are the quality of sleep or physical activity recorded; long-term ECGs are carried out, genomes are sequenced, laboratory scans and tests are performed on the heart, the kidneys and the mental condition. Diseases and their development are analysed in real time and thus become increasingly predictable. This is the quantification of “digital phenotyping”: someone who types slowly appears unhappy, and someone who hammers very quickly on his smartphone may be in a manic phase.
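The three comparisons can be written down directly. The BMI formula is the standard one (weight in kilograms divided by height in metres squared); all other numbers below are invented:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Standard Body Mass Index: weight / height^2."""
    return weight_kg / height_m ** 2

my_steps_today, my_steps_yesterday = 9_200, 7_800
friends_steps = {"ana": 11_000, "ben": 6_500, "chris": 9_900}

# 1) comparison with oneself: progress or regression over time
progress = my_steps_today - my_steps_yesterday                       # +1400

# 2) comparison with concrete others: the competition of data
rank = 1 + sum(s > my_steps_today for s in friends_steps.values())   # rank 3 of 4

# 3) comparison with a standardized average: the BMI 'norm' of 18.5-25
my_bmi = round(bmi(70, 1.75), 1)                                     # 22.9

print(progress, rank, my_bmi)
```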

All in all, algorithmic governance generates both a new politico-economic field and a new regime of data management, characterized by a specific technological performativity, the most important points of which we will record here:

1) The permanent capture of data: see Google’s attempt at the digitization of books, the collection of personal data through Google Street View, the circumvention of privacy, the storage of search data, the transmission of location through the smartphone, face recognition, body sensors, drones equipped with sensors, and the generation of data doubles. These are expropriation operations to which the user is subjected day and night, depriving him of his experiences, emotions and face.

2) Digital operations that process this data. The machine processing of this data involves the extraction and modification of behavioural characteristics, which are sorted by patterns, clouds and relations and integrated into rankings in order to evaluate the correlations existing between the data of the individuals. This data mining appears to be absolute insofar as the subjects have no possibility to intervene, so that we must speak of a totally automated production of protentions (expectations) that liquidates the difference between performative and constative behavior. The automatically produced protentions are processed in automated and networked AI systems. In these networks it is by no means true that the whole is greater than the sum of its parts; rather, there are no longer any parts at all, because the whole is omnipresent and manifests itself “in each of the devices built into all machines” (Zuboff 2018: 472). This is a modular system within which, in principle, every machine is the same machine and operates in the same logic as all other machines, even if modulations and transformations do occur. These artificial learning machines in turn require material infrastructures, or, to be more precise, configurations composed of hardware, software, algorithms, sensors and connectivity – configurations that today equip and design all kinds of objects: cameras, chips, nanobots, televisions, drones, etc. (ibid.: 156)

The digital machines integrate the individuals into the algorithmized field, where they appear as autoperformative effects of correlations of data. And the field into which the automated actions of the individuals are integrated is situated not in the present but in the future. At the same time, through methods of perfect adaptation, virality and plasticity, algorithmic governance reintegrates every disturbance and error into the system in order to redefine the models and profiles of behavior. Structurally, it seems, the power of the algorithmic system can never be disrupted, the improbable can never occur. Although this will never fully succeed, surveillance capital, by using algorithmic performativity, at least destroys the essence of the political.

3) Digital doubles (profiles), which are entirely the succinct result of machine operations and algorithms. In order to elicit an action from a user through these operations, the user’s digital double merely needs to send signals that in turn provoke the desired behaviors, stimuli and reflexes in the user. The tragedy of the profile subject consists in the fact that the more it wants to make the distinctiveness of its “self” visible through entries, the more forcefully it is modelled by the algorithmic machines. Now the struggle for the permanent and performative singularization of the profile takes place, a permanent task in which the subject ultimately fails without a sound, because the modular tableau in which its profile is inscribed provides the specifications, while actuality makes this so instantaneous that it should actually take your breath away. It could go on like this for ten thousand years. It should also be noted that the glamour of the profile exists only for those who have not made it up the economic ladder, because the truly rich and privileged remain offline.

As a digital double, not only is subjectivity eliminated; the subject itself is eliminated through the collection of infra-individual data that is assembled into a profile on a supra-individual level. The subject no longer appears. It always arrives too late and cannot bear witness to what it is or what it wants to do in the future; instead, as a user, it merges with its own data profile, which is designed automatically and in real time, not by it but by the algorithms. Nevertheless, the users also incessantly create their doubles or profiles actively, as if driven by an invisible power, but all in all they remain the frozen products of algorithmized control systems: on the one hand as individuals who embody a highly effective demand, on the other hand as a group of people who are unable to control their data.

In their FB newsfeed they are shown mainly those posts of their “friends” which have already attracted a lot of attention through large numbers of views and likes. The user is apparently addressed quite individually, as a singularity, which, however, proves to be modular, i.e. composed of discrete components in the sense of path tracking. Constitutive of surveillance capital is the algorithmic creation of profiles behind the backs of the users: the codes, algorithms and databases through which a profile is assembled remain invisible and inaccessible to those who are assigned a profile tailored to their needs and who nevertheless continue to believe that they are doing all this of their own free will, even, obsessed by delusion, expressing their own claims to singularity with their profiles and working themselves into narcissistic excitement, without the slightest suspicion that the algorithmic treatment defines them in advance and controls them remotely. The automated knowledge production operationalized by learning digital machines thereby produces an objectivity that should not be underestimated, contrary to all the subjective singularity claims of the excited. The user, however, is left with the section of the world constructed and tailored for him, which is accessible to him alone. The digital operations concerning behaviour thus anticipate individual wishes and assign them to the profile.

Desire economy and control

After the loss of workers’ knowledge in the 19th century and the loss of the knowledge of life in the 20th century, we are losing theoretical knowledge in the 21st. Or, to be more precise, a comprehensive proletarianization of theoretical knowledge is taking place, which builds precisely on the fact that the inscription of the worker’s body into the machinery, as described by Karl Marx, led to a proletarianization of the workers’ knowledge and thus of their living conditions, while later radio and television staged the proletarianization of the knowledge of life (of affects and social relations).7 The exploitation of the “spiritual value” of the individual reaches its peak today: it concerns thinking as such, its consistency, and thus also all sciences and their models and methods, and beyond that intuition and feeling as well. Weber, Horkheimer and Adorno described these processes of rationalization in detail, processes which for them clearly lead to nihilism.

Today, the attention of the subjects is entirely captured by the algorithmic governance that Stiegler describes as a reading industry or even as an entropic economy of expression that intensifies the proletarianization of producers/consumers. Parallel to the organization of consumption and the constitution of mass markets by the culture industry, the proletarianization of work coagulates into the job industry and defines skills only in terms of employment, which now coincides with adaptation to volatile jobs. But proletarianization today refers not only to economic impoverishment and precariousness, but also to the loss of control over knowledge, savoir-faire and production. Working knowledge and the knowledge of life are being replaced by AI machines and the communication of information systems in order to transform as much knowledge as possible into automation, whereby the proletarianization of knowledge has long since also affected the forms of planning and decision-making.8

Today, it is more and more the unpaid work of consumers – which is actually not work but rather division and employment in unpaid time – that feeds, sets and reinforces the parameters of the automatic and performative collective protentions and retentions (expectations and memories) produced by computerized capitalism. This was preceded by a period in which consumers were exposed to a mass division, the proletarianization of the knowledge of life. The marketing companies produced the collective secondary retentions that carried out what certain research departments demanded of them. At the beginning of the 21st century, the analogue culture industry was replaced by the digital culture industry, which integrated the proletarianized consumers into the digital-technical system through the thousand threads of networking and at the same time psychologically and socially dis-integrated them, i.e. produced a division resulting from networking which reassembled the proletarianized consumers and producers in the sense that they became nothing more than executive organs of information systems and of capitalization.

In 1993, the Internet established a worldwide digital infrastructure that radically changed telecommunications technologies and led to the total interconnection of many territories of the world, equipping their inhabitants with mobile and/or fixed devices compatible with the networks.

Since the early 1970s, the development of capitalism in the metropolises has, in the course of globalisation, also led to a global consumerism (from which, however, the surplus population, in the global South in particular, remains completely excluded), which has largely destroyed the thoroughly positive processes of binding the drives to all kinds of objects, processes that are part of a liberating sublimation. The anti-Oedipal desire that economizes the object by idealizing and transindividualizing it always goes hand in hand with the artificialization of life, which encompasses the technical, and this usually also reinforces and intensifies the power of sublimation when the individual is endowed with a transindividual memory, which Simondon calls psychic and collective individuation.

At first, vital individuation remains bound to an economy of instincts, which controls animal behavior with the rigor of an automatism, while with the advent of noetic life, which is formed by the libidinal economy (fetish, cult, ritual, etc.), instincts and drives become relatively de-automated, so that the desired objects can be replaced, shifted and reshaped. The instincts thus become accessible to artificial organs of fetishisation and transform themselves into drives. In this way vital individuation leads to a collective individuation in which the drives are constantly preserved and at the same time changed, insofar as they themselves change the objects of desire and are thus open to perversion. Perverse drives are structurally fluid and attach themselves to artificial organs; they are fetishistic and object-oriented.

This specific economy of the libido, which Stiegler certainly casts in a positive light, has been completely destroyed by capital in recent decades. The drives, currently released into panic and excessively into pornography, are controlled and at the same time amplified by an industry of automated tracking and tracing in the social networks. On the one hand, the drives are functionalized, i.e. transformed by mimetic mechanisms into consumerist drives that can never be satisfied, which on the other hand makes such unbound drives all the more destructive, uncontrollable and infectious. Both the channelling and the reinforcement of the drives through the application of mathematical algorithms imply an automated social control, while at the same time the drives are driven to a highly dangerous level and are also dis-integrated by the control mechanisms. Here Stiegler speaks of a modern capitalist economy of the soul, based on commerce and industrial technologies, although, strictly speaking, we are now already in a hyper-industrial epoch in which a totally calculating and quantifying capital reigns. In libidinal terms, however, this is a dis-economy that no longer cares about the relationship between libido and objects. For Stiegler, this absolutely calculating libidinal dis-economy is complete nihilism, or, to put it another way, the structural effect of the automation of libido and knowledge, of formalizations within a cybernetic-technological system, consists in a calculating and counting nihilism. This nihilistic drive capitalism, Stiegler writes, destroys the sublimation capacity of individuals, who are now driven into a dangerous process of de-sublimation, while the automated digital industries of the desiring economy have simultaneously driven out all real desire. Witness to this are the streamed hardcore floods of images from the US porn industry, gang rapes, serial murders during sex games, child abuse in churches, Olympic sports federations and UN camps, gender madness, etc.

Stiegler writes: “The libido is what tames the drives – as soon as it is destroyed, the drives are unleashed.” (93) But is, for example, the drive for enrichment really the result of the destruction of the libido, or is it rather a re-cathexis, in the sense of withdrawing energy from the principle of equivalence or the aesthetics of commodities and newly investing the striving for profit with energy? The technical objects described by Stiegler would then have to be reconsidered: some of them serve excellently as transitional and substitute objects. Today’s digital puppets, who also supply the data with their iPhones and smartphones, are a precious resource of capitalization – what more could surveillance capital want? The libido of the users is focused on the objects of desire; for the surveillance capitalists, in turn, the users’ data is the object of desire on which their “libido” is focused. That the death drive is at work here as well – the consumption of natural resources and the destruction of the environment (via the production of technical devices) on the one hand, and the binding and exploitation of consumers (for the purpose of surplus-value production) on the other – is in the one case accepted and in the other intended.

We are all becoming more or less stupid (made stupid); we are even becoming plagued and tormented beasts. The stupefying mechanisms of the industrial epoch, which were due to the comprehensive integration of workers into the system of machinery, cast their shadows forward, until Deleuze – once the associated disciplinary measures and norms had gradually lost their effectiveness and television had been transformed into a machine of total sensory regulation – spoke of the new societies of control, which also led to new forms of subjectivation and machinic subjection. Guattari in particular, with the introduction of the term “dividual” in the 1980s, just before the long winter he predicted, already foresaw that the ultraliberal control mechanisms associated with the computer would lead to the liquidation of human decision and judgment. Both behaviour and the analytical mind are now being durably automated and increasingly left to the power of artificial machines and algorithms. Smart networked sensors have long since registered and processed every type of cognition and, above all, every type of behavior in real time, the latter without the monitoring necessarily having to be conscious. AI machines analyze how both factors can be anticipated and safely and accurately modified, so that the targeted actors are triggered to perform precisely those real-time actions that allow the operators of surveillance companies to further perfect their behavioral and mind modification products. Within this economy of trace-gathering, the automation of behavioral data virtually creates an artificial, machinic art of hyper-control, which unfolds through the outsourcing of thinking into the digitized networks and is also monetarized there. Everything amounts to the exploitation of tertiary retentions, and almost all aspects of the behavior of individuals now generate digital traces, which in turn become objects of calculation and capitalization.

The transdividual (not transindividual) tracking of data needs to be described in more detail. One must refer here to the respective market segmentations of marketing, whereby the much-invoked individualization of customers, who are addressed with needs-based advertising, is more akin to a dividualization, an infra-individual division and a decomposition of collective individuation. The determination of desire through its automation, which releases and triggers the bad drives while intensifying them with network effects, is today being driven forward with the models of neuro-marketing, neuro-economics and the mathematical models of artificial neural networks. Neuromarketing attempts to generate actions in the consumer that do not require the formation of an autonomous desire. This is based, among other things, on the elimination of the constructive interruptions that normally lead to decisions in the first place: by integrating a sensorimotor loop into behaviour, there are no longer any delays between reception and decision, and thus no more social differences, so that we can speak of a purely functional cycle, a feedback loop accurate to the second – in Skinner’s sense a pure stimulus-response pattern operating with descriptive and prescriptive methods, in which, for example, the time interval (delay) that separates the reception of a product from its effect (purchase) no longer exists. The economics of personal-data algorithmization continues to reduce the time required for human decision-making processes, thus eliminating the “useless” time of thinking and contemplation. It is precisely the functional integration of mental individuation into an automatically associated milieu processing at the speed of light that constitutes for the individuals a factual naturalization of the technical milieu and at the same time an artificial naturalization, whereby individual and collective individuations mutate into collective and psychological divisions that function like a 24/7 insect society.
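
A minimal sketch may clarify what such a functional cycle amounts to, assuming a purely illustrative stimulus-response mapping (the stimuli and responses are invented): reception and “decision” coincide, and no interval for deliberation exists in the loop at all.

```python
import time

def respond(stimulus: str) -> str:
    # prescriptive mapping: each stimulus triggers a fixed response,
    # with no intervening step in which an autonomous decision could form
    responses = {"product_shown": "purchase", "notification": "open_app"}
    return responses.get(stimulus, "scroll_on")

for stimulus in ["notification", "product_shown", "idle"]:
    t0 = time.monotonic()
    action = respond(stimulus)            # reception and reaction coincide
    delay = time.monotonic() - t0         # effectively zero: no time to think
    print(f"{stimulus} -> {action} ({delay:.6f}s)")
```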

The algorithmic illness – for Stiegler a concomitant effect of the epoch of hyper-control – forces a fatal disorientation: all the great promises of the Enlightenment have become toxic today, indeed they have been transformed into processes of a generalized hyper-control of thought. This goes far beyond the control-through-modulation diagnosed by Deleuze, inasmuch as even the noetic faculties of theory are today short-circuited with the current operators of proletarianization that permeate the tertiary retentions. Yet whatever their matter and form, tertiary retentions remain dependent on primary and secondary retentions, on perception, imagination, expectation and memory, factors that are integrated into the processes of collective transindividuation, which are themselves already differential.

Finally, the treatment of data in the form of tertiary digital retentions, in real time and on a global scale – carried out by intelligent machines with computing capacities of trillions of gigabytes – requires learning, networked systems that absorb, modify and capitalize the data. But if mental and collective individuation are short-circuited with the digitized processes of automated transindividuation, then unpredictable individuations can also result; yet the automated drives remain subordinate to the automated retentional systems, formalized by mathematics and concretized by algorithms: the data traces generated by monitoring individual and collective behavior must be collected, modified and capitalized. In doing so, however, Stiegler always retains his concept of the “pharmakon”, i.e. he keeps looking for traces on the Internet that point to a new form of collective individuation.

Insurance and risk subjects

The control of contemporary risk subjects today requires profit-oriented insurance companies that continuously classify and evaluate their customers by assigning them numbers relating to factors such as consumption characteristics, interactions, health, education and creditworthiness, which, as we have already seen, also turns the customers, thus degraded to random samples, into divided entities, indeed dividuals. Because the insurance companies try to replace the imperative of uncertainty as far as possible not only with risk calculation but with certainty, their mobile apps permanently scan the behaviour of the insured, for example the behaviour of car drivers, so that insurance premiums can fall or rise from second to second, on the basis of information about how fast you are driving at the moment or whether you are on the phone while driving, whereby machine processes detect violations of fixed parameters and then punish them. Consequently, customers are broken down into behaviour-oriented tariffs, while machine processes push the behaviour of the insured towards maximum insurance profitability by punishing deviant behaviour with fines or increased premiums. (Zuboff 2018: 249) Health insurers, for example, have long since made not only health but also fitness a moral imperative; they have distributed risks among the population in such a way that the sick and overweight, or simply those who pay too little attention to their health, can expect downgrading, sanctions, restrictions or even exclusion from insurance.
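
How such second-by-second tariff adjustment might look mechanically can be sketched as follows; the parameters, surcharge factors and reward factor are invented for illustration, since real tariff models are proprietary.

```python
def adjust_premium(base: float, speed_kmh: float, limit_kmh: float,
                   phone_in_use: bool) -> float:
    """One telemetry tick: conforming behaviour is rewarded, deviance punished."""
    premium = base
    if speed_kmh > limit_kmh:          # violation of a fixed parameter ...
        premium *= 1.05                # ... is punished immediately
    if phone_in_use:
        premium *= 1.10
    if speed_kmh <= limit_kmh and not phone_in_use:
        premium *= 0.99                # maximally profitable docility pays off
    return round(premium, 2)

# premiums can rise from one second to the next
print(adjust_premium(base=100.0, speed_kmh=134.0, limit_kmh=120.0, phone_in_use=True))
```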

Silicon Valley has long since discovered illness as a market potential and a field of innovation. Large companies in particular are pushing single-mindedly into this market, which in the USA alone has a volume in excess of three trillion dollars. Amazon recently founded a health insurance company, is building trial clinics for its own staff and has taken over the Internet pharmacy PillPack. Until the data scandal surrounding Cambridge Analytica, Facebook negotiated with hospitals about anonymized health data in order to compare it with that of its users. The most advanced player in the race for health, however, is currently Alphabet. Google’s parent company recently developed AI-based software solutions to determine more accurately the course of illness and even the time of death of patients in hospitals. Together with Verily, a subsidiary formerly known as Google Life Sciences, the company has already been researching a contact lens that uses tear fluid to measure blood sugar.

The smartphone, a multi-sensory glass device on whose surface the unconscious seems to be reflected – at least for the digital health avant-garde – is currently proving to be the best of all behavioristic recording systems.

Completely new lines of sight are opened up in particular by the start-up Mindstrong Health, founded by Thomas Insel, former director of the American National Institute of Mental Health and, not by chance, also former head of the mental health department at Verily. The typing behavior of the smartphone user – how he scrolls, clicks or swipes – is analyzed in order to create behavioral profiles by means of pattern recognition, profiles which, like compass needles, point to mental weaknesses.
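
What such pattern recognition could look like in outline is sketched below; the features, reference profiles and numbers are invented, and nearest-centroid matching stands in here for whatever classifier Mindstrong actually uses.

```python
import numpy as np

def typing_features(intervals_ms: list[float]) -> np.ndarray:
    """Reduce raw keystroke intervals to a small behavioural feature vector."""
    a = np.array(intervals_ms)
    return np.array([a.mean(), a.std(), a.max() - a.min()])

# hypothetical centroids of behavioural profiles learned from labelled data
centroids = {
    "baseline": np.array([180.0, 40.0, 150.0]),
    "flagged":  np.array([320.0, 120.0, 500.0]),   # erratic, slowed typing
}

sample = typing_features([305, 410, 260, 515, 390])
label = min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))
print(label)   # assignment of the typing sample to the nearest profile
```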

Insurance companies, which today command a broad repertoire of risk-management models and methods as well as the corresponding financial instruments, summarize in tables the quantitative elements that define the behavior of the insured and convert them into qualitative categories of a higher scale, so that the elements are constantly recombined, new sophisticated incentive and allocation systems are created, and higher profits are thus achieved by means of the increase in performance that the subjects documented by risk profiles have to deliver themselves. (Lee, Martin 2016: 539) Based on standardized risk definitions, insurance companies collect data in order to sort, classify and finally price risk subjects according to quite common criteria such as income, family origin, occupation, place of residence, gender and education. There are companies that design a risk score for individuals based on the history of their employment, their rented apartments, their relationships with family members and friends, etc., whereby socially precarious individuals are identified as high-risk on the basis of these classifications and the evaluations of machine procedures, so that economic and social inequality is algorithmically re-inscribed. Insurance companies are thus determined to permanently expand their power over the insured, which they also do when they appear with their money as big players on the financial markets.

Today, the creation of specific risk profiles is a quantification work sui generis: on the one hand, a quantified profile of the person must be created; on the other hand, the person must be constantly challenged to improve their economic position. In this context of body-activation work, medical diagnoses are increasingly merging with the various wellness, fitness and lifestyle offers, with a whole range of portable devices extending the body in order to quantify bodily states, i.e. to record states and processes of movement, sleep and stress behaviour, and alcohol and nicotine consumption, in order finally to perfect control and provide instructions for the further self-optimisation of the body. This work on one’s own body results in a personal share price of health and well-being in real time (ibid.: 116), which in turn provides new incentives for the actors to supply insurance companies and health insurers with information for the elaboration of new tariff models, which the insurers convert into personalized cost calculations. This type of self-measurement is pushed by the insurance companies through the development of technical applications that measure certain behaviours in order to relate them to existing tariff systems. A finely woven network of insurance companies, customers and app producers is developing here, within which the differentiation and personalisation of health-related data is constantly being driven forward in order to create further monetary incentive systems and to stimulate the participation of individuals, all under the aspect of a playful and free handling of data.
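
A minimal sketch of such a real-time “share price” of health, assuming invented inputs and weights (the actual scoring models of insurers are not public):

```python
def health_score(steps: int, sleep_hours: float, resting_hr: int) -> float:
    """Fold wearable readings into a single reportable number between 0 and 100."""
    activity = min(steps / 10_000, 1.0) * 40
    sleep = min(sleep_hours / 8.0, 1.0) * 30
    cardio = min(1.0, max(0.0, 1.0 - (resting_hr - 60) / 40)) * 30
    return round(activity + sleep + cardio, 1)

# one tick of self-measurement, ready to feed a personalized tariff model
print(health_score(steps=7200, sleep_hours=6.5, resting_hr=72))   # 74.2
```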

While the responsible risk subjects in the relevant wellness centres are stubbornly occupied day after day with their individual fitness in order to creatively reinvent themselves in infantile-singularised rituals of freedom (and thus mostly only adapt to the fact that they are not in the slightest position to change anything, or perhaps do not even want to), they are nevertheless classified by the insurance companies as rather stereotypical and simple protagonists who live a very ordinary life, resembling “a job advertisement brought to life, a successful synthesis of all the character traits that personnel managers and elementary school teachers wish for in a person” (Pohrt). And to repeat: while the successful and above all risk-affine subjects, especially those of the elites and the upper middle class, constantly invoke the liberation from the shackles of encrusted old identities and a new regime of singularity, while at the same time directing their thoroughly algorithmized passions to all kinds of analog events – to live concerts and music festivals in their allegedly extra-ordinary everyday life, to sports and art events, to everything that the event industry stages day after day – they continue to be scanned, quantified and motivated extremely efficiently by insurance companies and other private control companies. To possess a successful identity then means to generate and assert oneself as a risk subject and to remain basically the same: capable of action, opportunistic and willing to take risks – all in all a new form of stupidity, because the Great Other continues to rule unnoticed in the shadows and the risk subject constituted by capital feels, at least unconsciously, obliged to it. The upwardly mobile middle class – economically upward, culturally downward oriented – finds this surface text extremely chic, and some of its representatives, who perhaps even occupy higher functions in the insurance companies themselves, manage to outdo one another in the high of hedging their lives, in a restless outpouring of functions, formulas and slogans that mercilessly and at the same time rather drearily ornament their own lifestyle. At the same time, the existence of the subjects willing to take risks is itself subject to a control structure (statistics, tables and taxonomies) operationalised by the insurance companies, which categorizes, classifies and sorts their clients strictly according to risk categories – for the purpose of establishing a proper and profitable risk profile. Self-optimization processes and control structures are therefore mutually dependent and reinforce each other.

It fits this picture that companies, sovereign states and private individuals today are checked for their creditworthiness not so much by analysing the specific individual case as on the basis of uniform quantitative indices, i.e. credit control through examination of the individual case is replaced by the creation of standardised and algorithmised risk profiles. Thus the Fico Score was introduced for the evaluation of consumers, an algorithm that can be considered an important statistical instrument for controlling the neoliberal subject per se. Scoring in general means attributing characteristics such as performance, efficiency, profitability or solvency to certain entities/subjects, which are then divided into groups according to the scaling of these criteria, i.e. they are classified and evaluated with points, which are in turn weighted and combined to form a credit rating; finally, the total score is used to determine the granting of credit. Insurance companies use the Fico Score to construct the credit histories of customers, companies use it to check job applications and search for optimal locations, health insurers use it to forecast whether patients are taking their medication properly, and casinos use it to identify the most profitable guests. In summary, a dense network of rankings, ratings and other evaluation mechanisms today permeates the social fields and applies to almost all activities and areas. Certain credit scorings even go so far as to use secret algorithms not only to obtain and evaluate information on health status, mobility, job changes and personal risk management, but even to draw on data from friends, which, depending on their economic status, can bring advantages or disadvantages for the credit seeker.
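
The logic of such a scorecard can be sketched briefly; the bins, point values and cutoff below are invented for illustration and do not reproduce the actual, proprietary Fico model.

```python
def credit_score(income: float, years_employed: int, defaults: int) -> int:
    """Scale characteristics into classes, assign points, sum to a total score."""
    points = 0
    points += 80 if income >= 60_000 else 50 if income >= 30_000 else 20
    points += 60 if years_employed >= 5 else 30 if years_employed >= 2 else 10
    points += 40 if defaults == 0 else 0      # a clean history earns points
    return points

score = credit_score(income=42_000, years_employed=3, defaults=0)
print(score, "-> approved" if score >= 120 else "-> declined")   # 120 -> approved
```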

Statistical-mechanical procedures are used to determine the probability that someone will service a loan by assigning credit ratings to customers, which decides not only on the granting of loans in general but also on their exact conditions (terms, interest rates, etc.). Today, broadly accessible data material is available on the history of debt, on market activities and on the economic situation of the subjects in general, material which is constantly being expanded and incorporated into current risk calculations and evaluations. The trend is to integrate ever more parts of the population into the credit system via so-called credit risk colonization and thus keep them available for generating a surplus for financial capital. (Mau 2017: 109) When granting loans, the notorious apps are used, which read a range of data from the applications on an applicant’s smartphone, from postings in social media, from GPS coordinates and from e-mails and profiles, and then use this data to calculate the credit risk by employing algorithms to construct patterns that indicate the probability that the applicant will be able to service loans in the future. (Zuboff 2018: 202) And as if that were not enough, this information is in turn used to refine the Fico algorithm and other algorithms as well, a side effect of which is the creation of even more accurate and comprehensive personality profiles that not only include a user’s financial stress level but perhaps can even smell their farts.
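
The final step, turning such patterns into a probability, might be sketched as a logistic model; the features and weights are invented, standing in for whatever a lender’s model has actually learned.

```python
import math

def repayment_probability(gps_regularity: float, contacts_in_default: int,
                          battery_discipline: float) -> float:
    """Map smartphone-derived features to a probability of servicing a loan."""
    z = (1.8 * gps_regularity          # hypothetical learned weights
         - 0.9 * contacts_in_default
         + 1.2 * battery_discipline
         - 1.0)                        # intercept
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: score -> probability

print(round(repayment_probability(0.8, 1, 0.6), 3))   # ~0.565
```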

Rating and Ranking

Today, a new regime of objectivity is being generated that not only makes differences and comparisons visible, but also employs new classifications to control the system and the subjects. While rating serves to judge and evaluate certain objects, facts and subjects according to patterns by means of certain techniques, ranking places facts, objects, persons etc. into an order of rank. The monetization and economization of ratings and rankings today leads, in every conceivable area, to a permanent restructuring of methods of efficiency, performance-based resource allocation and budgeting from the point of view of quantifying the increase in profitability by means of input-output matrices translated into figures, affecting areas such as education, health, prisons and even wars.
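
The distinction can be fixed in a few lines, with invented data: rating assigns each object a score on a scale, ranking derives an ordinal position from those scores.

```python
# rating: a score per object on a common scale
ratings = {"clinic_A": 7.4, "clinic_B": 9.1, "clinic_C": 7.4}

# ranking: an order of rank derived from the ratings (ties share a rank)
ordered = sorted(ratings, key=ratings.get, reverse=True)
ranks = {name: 1 + sum(r > ratings[name] for r in ratings.values())
         for name in ratings}

print(ordered)   # ['clinic_B', 'clinic_A', 'clinic_C']
print(ranks)     # {'clinic_A': 2, 'clinic_B': 1, 'clinic_C': 2}
```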

The subjects themselves are constantly active in further fuelling these quantification processes: they present themselves in the media and institutions with self-generated data (on income, body and weight, distances covered, health), and are all too happy to compare their data with those of their competitors, thus in a way contributing to a numerical hegemony whose aura of objectivity is sustained by the heroicization of the number. Such counts and comparisons, however, are by no means given a priori; they are constructed in quantifying social processes, among other things by setting up consensus-oriented measuring procedures that constantly reinforce only the desired behaviour, which is integrated into rankings according to metric and ordinal differences. This presupposes, on the one hand, the equivalence of the subjects’ behaviours, which amounts to a specific indifference towards them, and, on the other hand, the staging of hierarchical rankings that intensify competition and dictate the more-or-less or the better-or-worse of the subjects’ behaviour.

Today, just about every fact, object or subject can be put into an order by means of ranking: think in particular of the situational fixing of the popularity of every kind of celebrity, be it a politician, a football player, a model or even a porn star; but the ranking naturally also affects the profitability of companies and institutions, the educational achievements of universities and health care services, the splendour of cities, the taste of food and drink, the dating sites and the lifestyle of the middle classes in the metropolises, the reputation within the professions, and it even affects states, which are hierarchically ranked according to their debt levels. This suggests an apparently objective quantitative assessment that pays little attention to qualitative differences, nuances and deviations, and rather sanctions them, excludes them, or at least relegates them to the lower ranks of the social body. The sociologist Mau speaks here of objectivity generators that not only quantify social relations and subjects by scaling them, but also visualize the results, i.e. translate them into charts, diagrams and tables. These are by no means purely descriptive processes, but performative practices employing a-signifying semiologies that serve to map and generate hierarchical structures and systems which are always politically, economically and socially charged; they assign subjects to specific places within a ranking scale by defining the criteria and procedures by which those places are assigned and occupied.

We have seen that rating is a continuous procedure by which subjects, objects and facts are judged, evaluated and assessed with regard to certain characteristics, performances, profitabilities and behavioral dimensions translated into symbols and numbers, while ranking goes beyond this in that the rated objects and subjects are put into a numerical order by differentiation. Both are objectifying classification procedures with which external observation and self-observation are integrated into one system, whereby the performative aspect consists in the actors being asked to relate to themselves and others on a permanent basis. (Mau 2017: 76) For the capitalist profiteers of the ranking, free access to large data sets, which the actors provide voluntarily, is the first prerequisite for creating a dispositif characterized firstly by the identification of differences and secondly by the competition for exclusive placements, the latter being by definition unstable and relative, so that the struggle against one’s own devaluation and for outbidding others remains ever present. The purpose of this silent struggle, when we look at the subjects, is to make one’s own position visible, especially when one is already at the front of the field, and this produces very real and not merely symbolic effects. The important indicators to which these comparative procedures refer today are, without question, profitability, efficiency and productivity.
The quantification methods of screening and scoring work in a similar way, but they are more individualized, or rather, they operate on the individual. Here too, individuals are assigned certain characteristics, such as performance and solvency, health and education data or special risks, and the results are eagerly taken up by companies, markets and other organisations, which in turn make selections that, as if it were a natural state of affairs, give advantages to some and disadvantages to others. With the attribution of certain characteristics, the subjects are divided from the outset, or are selected from a larger pool of actors by screening through the application of a certain number of parameters, and are thus present only as samples; but there must always be inclusions and exclusions. In scoring, too, statistical data and procedures are used to bring the actors into a fine-grained order of importance. (ibid.: 104)

Screening procedures are now essential to how companies process applications, with external service providers usually taking over the execution by sorting people in and out, true to the methods of a dragnet search. In doing so, the screening companies can access a vast amount of data and information that users constantly produce in the social media, such as information about their circle of friends and living environment, their mobility, their consumer behaviour and much more. With these procedures too, subjects are assigned places in the various sub-areas of economy, culture and politics, thus transforming them by definition into divided entities, and the algorithmic assignments determine their access to optionalities, action potentials and, in general, to the possibilities of shaping life as a positive product to be capitalized.
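
A minimal sketch of such dragnet-style sorting, with invented fields and cutoffs:

```python
applicants = [
    {"name": "A", "gap_months": 2,  "address_changes": 1, "risk_flags": 0},
    {"name": "B", "gap_months": 14, "address_changes": 5, "risk_flags": 1},
    {"name": "C", "gap_months": 0,  "address_changes": 2, "risk_flags": 0},
]

def passes_screen(a: dict) -> bool:
    # each parameter acts as a bare exclusion criterion,
    # not as a judgment of the individual case
    return (a["gap_months"] <= 6 and
            a["address_changes"] <= 3 and
            a["risk_flags"] == 0)

shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)   # ['A', 'C'] -- B is sorted out without individual review
```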

Literature

Bahr, Hans-Dieter (1970): Critique of “Political Technology”. Frankfurt/M.

- (1973): The Class Structure of Machinery. Note on the Value Form. In: Vahrenkamp (Ed.): Technical Intelligence in Late Capitalism. Frankfurt/M. 39-72.

- (1983): On the Handling of Machines. Tübingen.

Mainzer, Klaus (2014): The Calculation of the World. From the World Formula to Big Data. Munich.

Martin, Randy (2009): The Twin Towers of Financialization: Entanglements of Political and Cultural Economies. In: The Global South 3(1). 108-125.

- (2015): Knowledge Ltd: Toward a Social Logic of the Derivative. Philadelphia.

Mau, Steffen (2017): The Metric We. Frankfurt/M.

Sahr, Aaron (2017): The Promise of Money. A Practical Theory of Credit. Hamburg.

Schlaudt, Oliver (2011): Marx as a Measurement Theorist. In: Bonefeld, Werner/Heinrich, Michael (Eds.): Capital & Criticism. Hamburg.

- (2014a): What Is Empirical Truth? Pragmatic Theory of Truth between Criticism and Naturalism (Philosophical Treatises). Frankfurt/M.

Strauß, Harald (2013): Significations of Work. The Validity of the Differentiator “Value”. Berlin.

Vief, Bernhard (1991): Digital Money. In: Rötzer, Florian (Ed.): Digital Bill. Frankfurt/M. 117-147.

1 Hannah Arendt writes (p. 445): “It is quite conceivable that the modern age, which began with such an outrageous and incredibly promising activation of all human assets and activities, will finally end in the deadliest, most sterile passivity that history has ever known.” With the digital media we live in an increasingly flat ontology in which every event correlates with every other event at mostly the same level of intensity, so that a network of relations is created in which no event is supposed to have a specific meaning anymore. In the world of updates, comments, opinions and fake news, the concept of communication replaces that of truth.

2 Randy Martin showed in his book “Empire of Indifference” that indifference and endless circulation belong together and that even today the asymmetric small wars still circulate in the global net. (Martin 2007) Moreover, the corresponding interventions revolve around the possibility of circulation, as opposed to the possibility of proclaiming sovereignty. For Martin, this is a shift similar to that from the shareholder who holds the shares of a company to the trader of derivatives, who generates wealth by managing risk. The unintended consequence of this risk management, which Martin sees at work in both global financialization and the U.S. empire, is that it simply increases the volatility of what it implies. The result is a cycle of destabilization and derivative wars, a configuration that Martin calls the “empire of indifference”. This empire is no longer characterized by progress or development, but only promises its occupants the management of an ever-present array of risk opportunities.

3 According to Bernhard Vief, the principle of equivalence is inherent in the analog: the ana-logon corresponds to a template that is laid over two different objects in order to align them, which of course immediately raises the question of what this template could offer as a so-called tertium comparationis in order to enable a comparison between two completely different objects. (Cf. Vief 1991: 138) Marx cannot be satisfied with fixing the third term in the metal weight of money qua objectified labour, as Adam Smith and David Ricardo had still done; instead he relies on the axiom of “abstract labour” or “abstract labour time”, for which, however, he again offers no objective measure (what does abstract labour mean as an immanent measure of value?), which is why Vief sees an infinite regress in Marx himself. This is why Vief makes a shift at this point and bases at least digital money on pure difference, which, as he claims, indicates the absence of any measure of value, a measure that has today been replaced by the binary code, so that money no longer represents a general equivalent but is characterized solely by difference. (Ibid.: 139) Apart from the questionable assumption of equating difference with digitality (for Deleuze, for example, difference is precisely not digital), it is in fact digitality qua binary code that appears today as the medial basis of the electronic form of money, so that money is primarily based on meaningless data, which, however, still have to mean (and refer to) something; and of course the flows of money need matter/energy in the space of the symbolic, in which they oscillate (transfer and store) between processes and stases, because, after all, res cogitans cannot do without res extensa. (If the symbolic, although it has no reference to the object or to labour, can nevertheless take place only on material wrested from the continuum of time, and operations therefore occur primarily in the space of the symbolic, thus to a certain extent disempowering the time axis, then operations as instantaneous processes need time only marginally, even if it is not as irreversible as time outside the symbolic.) Like Oliver Schlaudt, Bernhard Vief thus also poses – albeit with a different weighting – the question of the reference of money to a third term. While Vief, however, no longer sees any necessity to refer to abstract labour at all, Schlaudt cites this reference as the special achievement of the labour theory of value, with which Marx identified abstract labour as an immanent measure of value. (Vief 1991: 135f./Schlaudt 2011: 265f.) In fact, a decisive problem arises here for Marx, which we will discuss in the chapters on value and abstract labour.

4 Freedom of opinion means the free circulation of opinions (not of discourses, narratives, and certainly not of truths). Elaborate systems theory put this in a nutshell early on, without, however, considering the fatal consequences: “All communication is social if it disregards exactly what it is about, what it talks about, what it connects to, what consequences it has. Communication is only social from the point of view of no specific meaning or, better, excluding any meaning at all, except that communication always means something.” (Peter Fuchs 2001: 112) Freedom of expression is guaranteed. It circulates like oil, capital and jungle camp.

5 This does not mean that Facebook does not censor: every single post, comment and message is read and analyzed by employees and/or machines to determine whether it complies with the company’s arbitrary, largely undefined and opaque standards. In addition, the company also forwards information about political statements, especially from the left, to the police and intelligence services.

6 Just as protocols are everywhere, so are standards. One can speak of ecological standards, safety and health standards, building standards, and digital and industrial standards whose institutional and technical status is made possible by the way the protocols function. The capacity of the standards depends on control through protocols, a system of governance whose organizational techniques determine how value is extracted from those integrated into the different modes of production. But there are also the standards of the protocols themselves. The TCP/IP model of the Internet is a protocol that has become a technical standard for Internet communication. There is a specific relationship between protocol, implementation and standard that concerns digital processes: protocols are descriptions of the precise terms through which two computers can communicate with each other (i.e., a dictionary and a handbook for communicating). Implementation means the creation of software that uses the protocol, i.e. that handles the communication (two implementations using the same protocol should be able to exchange data). A standard defines which protocol should be used on certain computers for certain purposes. It does not define the protocol itself, but sets limits on how the protocol can be changed.
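
The three-way distinction can be illustrated with a toy line-based protocol (not TCP/IP itself); everything here is invented for illustration.

```python
# the protocol: a "dictionary and handbook" describing the terms of exchange
PROTOCOL = {"greeting": "HELLO", "separator": " ", "terminator": "\n"}

def encode(payload: str) -> bytes:
    """An implementation: software that produces messages conforming to PROTOCOL."""
    msg = (PROTOCOL["greeting"] + PROTOCOL["separator"]
           + payload + PROTOCOL["terminator"])
    return msg.encode("ascii")

# a standard: it does not redefine the protocol, but sets limits on its use,
# e.g. mandating it on a given port with a maximum message size
STANDARD = {"port": 7007, "max_bytes": 512}

wire = encode("world")
assert len(wire) <= STANDARD["max_bytes"]
print(wire)   # b'HELLO world\n'
```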

7 The fragmentation and simultaneous standardization of everyday life described by Lefebvre led, from 1970 onwards, to a symbolic misery characterized by the dominance of the audiovisual, analog mass-media apparatuses, which heralded a period of strategic marketing that was then fully implemented by means of the privatization of radio and television. According to Stiegler, the symbolic misery or de-symbolization of the everyday narrative heralded by these developments led to a proletarianization of sensibility and the destruction of desire or, what amounts to the same thing, to the ruin of the libidinal economy. The speculative marketing of the financial industry represents the provisional climax of this development. The mechanization of sensibility and the industrialization of symbolic life are today inscribed in “communications”, which in turn is characterized by the distinction between the professional producers of symbols and the proletarianized, de-symbolized consumers.

8 Where Stiegler still speaks of the proletarianization of knowledge, Zuboff sees in the division of knowledge, which overlays the division of labor, a pathology that has today fallen into the hands of a small clique of computer specialists, of “machine intelligence” and of economic interests. If all this is true, and if it is also true that the pathologies are those of the system, then it is necessary to analyze the composition of today’s capital: production qua exploitation, speculation qua financial capital, plunder qua extraction and marketing of data.
