If one follows the theory of technical objects developed by the French theorist Gilbert Simondon (Simondon 2012), and then the statements of Frédéric Neyrat in the anthology Die technologische Bedingung (Hörl 2011), the task for today’s hypertechnized societies is to fundamentally rethink the always already disturbed identity of nature and technology (there is no total integration of nature into technology, nor is technology to be understood as an expression of nature) by rethinking the machines, or technology itself. Technical objects, which by no means prolong human organs like prostheses or serve human beings merely as means, are first affirmed in their pure functionality, so that in their inconclusive supplementarity they can finally attain the status of coherent and at the same time individualized systems, whose localizations are embedded in complex machine associations or networks. (ibid.: 37) Almost simultaneously with Gilbert Simondon, Günther Anders had spoken of machines as apparatuses – but of a world of apparatuses that had made the difference between technical and social designs obsolete and thus rendered the distinction between the two areas generally irrelevant. (Anders 1980: 110) According to Anders, every single technical device is integrated into an ensemble, is itself only a device part, a part in the system of devices – the apparatuses – with which it satisfies the needs of other devices on the one hand, and, by its mere presence among other devices, stimulates the need for new devices on the other. Anders writes: “What applies to these devices applies mutatis mutandis to all … To claim that this system of devices, this macro-device, is a ‘means’, that it is available to us for free use, would be completely pointless. The device system is our ‘world’. And ‘world’ is something other than ‘means’. 
Something categorically different.” (Anders 2002: 2) Or, to put it in Simondon’s terms: in view of our post-industrial situation we should speak of technical objects whose elements always form recursions and entertain inner resonances with each other, while at the same time the objects stand in external resonance with other technical objects, in order to play out their own technicity in ensembles as open machines. In addition, many technical entities develop a plural functionality, executing several functions instead of one within a machine system – such as the internal combustion engine, whose cooling fins assume the functions of cooling and reinforcement when they counteract the deformation of the cylinder head. (Cf. Hörl 2011: 97) Simondon did not adopt the profoundly pessimistic view of post-industrial technologies found in Günther Anders’ work; rather, in those technical objects that elude the hylomorphic juxtaposition of form and matter, as still conceived in the working model qua matter to be formed by tools, Simondon identified a possibility of technology approaching nature’s autonomy, a tendency that leads to the dynamic unity of the technical objects themselves, in that these objects dynamically close themselves off and, among other things, incorporate a part of the natural world by creating associated milieus, connecting their interior (resonance of different parts and multifunctionality of parts) with the exterior, with other objects, be they natural or artificial. Yet the technical object cannot completely separate itself from an excess of abstraction that distinguishes the so-called artificial, heteronomous object, and Simondon attributes the power of abstraction above all to the human being as his constitutive contribution to technology, a power that prevents the technical objects from concretizing themselves in open structures and playing out their tendency towards autonomy. 
(Ibid.: 154) Simondon, however, is by no means tempted by the thesis that in a post-industrial future everything living must be rigorously subordinated to open technical ensembles; on the contrary, he advocates a social concept of technical ensembles or open machine associations in which the human being coexists with the “society of technical objects”. But where man intervenes too strongly and dominates the technical, we are dealing with heteronomous artificial objects, whereas the technical object at least tends towards an autonomy (though it can never completely shed abstraction) that includes the natural moment, i.e. the unity and consistency of a machine system. Paradoxically, for Simondon it is precisely the artificial that prevents technology from becoming natural. (ibid.: 154) According to Simondon, abstract artificiality is always due to a lack of technicity, whereas the technical object is to be concretized in coherent processes, whereby each local operation of the technical object is to be integrated into a comprehensive arrangement of the machinic ensembles. (Hegel defines the concrete as that which includes the relational, while the abstract is regarded as one-sided or isolated. The terms “concrete” and “abstract” therefore do not designate types of entities, such as the material and the immaterial, but describe the way in which thinking relates to entities. Thus the abstract can prove to be the most concrete and the concrete the most abstract. A materialist concept must be able to explain what constitutes the reality of a conceptually formed abstraction without hypostatizing this form. It must be able to show how abstractions are treated by social practices – the latter being more than just work processes that shape matter – when they ultimately reposition themselves in a very specific way, i.e. concretize themselves in relation to technical objects, as proposed by Simondon.) 
Thus the technical object always functions in associated milieus, i.e. it is connected with other technical objects or is self-sufficient, and in doing so it must always respect nature.
Simondon’s technical objects refer to their embedding in net structures; already in the 1960s he foresaw the contemporary coupling of technical objects to the digital, information- and computation-intensive ecology of new media, the dispositive of digital, transformational technologies, including a neo-subjectivity deformed, rendered non-intentional and distributed by machine speeds. Almost in tune with cybernetics, Simondon is aware that the machine is not used as or like a tool, but rather operated. Technical objects are neither prostheses of the human being, nor, conversely, can the human being be reduced to a prosthesis of the machines. First of all, the technical objects should be conceived purely in terms of their functionality, and this with regard to their explainable genesis, in the course of which, according to Simondon, they increasingly concretize (rather than abstract) themselves on the basis of an immanent evolution, beyond the adaptation and expediency of their use or their fixation as means. However, the technical object is not a creative agent in its own right; it remains confined within an economic and scientific context, and the process of its concretization asserts synergies, the interaction with other functional subsystems, by modifying and completing the functionality of the technical object. The movement of concretization includes the organization of functional sub-systems in which the technical object matures into a technical ensemble, which in turn is characterized by comprehensive social, economic and technical processes and their structuring. Concretization also means the tendency towards innovation, in which a series of conflicting requirements is satisfied by multifunctional solutions of individual technical objects, creating causal cycles that integrate the respective requirements. 
Technical elements (parts of the machine), technical individuals (machine as a whole) and technical ensembles (machines as part of social, technical and economic systems) are each already in a dynamic relationship that potentially releases a process of technological change. However, the economy is not dominated by the media/machines; rather, capital and its economy determine the technological situation.
According to the French theorist Frédéric Neyrat, the disturbed identity between nature and technology refers to the “hyperject”, which describes the machinic autonomization of technology vis-à-vis human actants as well as the material substitution of the natural by the artificial, without, however, having to assume a total integration of nature into technology. (Technology as a detachment from nature, as a substitution of natural substances by plastics, and as a detachment of technology from man by means of machinic autonomy; it must be assumed that machines and their materials stand in a relationship of interference.) One can identify the hyperject as a substitution and autonomization milieu (materials and machines) of the technical, independent of subject/mind and object/nature, whereby, with regard to the contextualization of the two milieus, one should speak not of associations but of superimpositions when discussing the internal and external resonances of the technical objects.
Post-industrial technology – e.g. Gotthard Günther’s concept of the transclassical machine – stands between nature and spirit, because precisely the processes of double detachment forbid reducing the transclassical machine purely to a scientific-human creation, since it follows an independent logic of reflection. At issue is the transclassical machine, whose essential function is to deliver, transform and translate information. (Information articulates the difference that makes a difference, as Gregory Bateson sees it – but not insofar as the smallest unit of information, a bit, is simple, as Bateson assumes; rather, as Bernhard Vief writes in his essay Digital Money (Vief 1991), it is given twice: bits are immaterial, relative divisors; they stand for a movement of differentiality that is neither present nor absent, and thus the binary code, the binary sequence of numbers, can be positioned as an effect of the alternation that articulates and positions them. As Lacan has shown with the example of the cybernetic machine, the articulated is of the same order as the symbolic register, whereby the switches of switching algebra represent the third of that order: the articulation, which itself is neither open nor closed, indicates the possibility of the purely positional states.) The transclassical machine can be mapped neither to the object nor to the subject; rather, it harbors a trivalent logic: subject, object, and the transclassical machine as hyperject. 
The hyperject belongs neither to nature (object) nor to spirit (subject), and thus it is subject to an exteriority which, however, is by no means to be understood as the outsourcing of the interior of a subject, but rather indicates an independent “region of being” – it contains a trivalence that proves its incompleteness per se, because it does not synthesize the opposites (subject and object); on the contrary, these non-trivial machines (Heinz von Foerster) remain removed from complete analysis as well as from synthesization. At this point, however, the concept of technical being must face the question of whether the media of technical objects can be captured ontologically as modes of dispersion into open spaces, or as the dispersion of space itself. In the last century, second-order cybernetics created its own constellation of concepts (feedback, autopoiesis, temporal irreversibility, self-referentiality, etc.) that have long since migrated into mathematical models and computer simulation. Although this does not dissolve the material substrate or the physicality on which those processes sit, the autonomous-immanent relations and interactions of a multi-level complexity reign here, with complexifications taking place in each individual contingent process: systems transform random events into structures, and conversely, certain events can also destroy structures, so that a single system indicates a continuous fluctuation between disorganization and reorganization as well as between the virtual and the actual in almost every conceivable case. Gotthard Günther above all has tried to present the ontological implications of these forms of knowledge and has introduced the concept of polycontexturality. In a polycontextural world context, the transclassical machines that operate in a rift, or as the third between subject/spirit and object/nature, are scattered over a multitude of objects, qualities and differences. (ibid.: 165f.) 
These transclassical machines are conceivable as ensembles of universes, each of which can raise an equivalent claim to objectivity without having to represent or even eliminate the claims of other ensembles. Here the concept of contexture describes a continuum of potential reality that changes shape with each quantification. Günther therefore speaks of the contingency of the objective itself, whose difference does not convey an intelligent hierarchy, with the consequence that in these technological fields we are dealing less with classifications or taxonomies than with decision-making situations and flexible practices. The computers known to us so far, on the other hand, operate only auto-referentially, i.e. they cannot process the difference between their own operations and the environment within themselves.
Frédéric Neyrat introduces the so-called holoject as a fourth level of technology, which, in contrast to the hyperject as a medium of absolute connectivity, refers both to the subject and to the object, to the superposition of both components, which is always continuous, unstable and endless. (Ibid.: 168f.) As such, the holoject in-exists, but it can transfer its continuity properties to the hyperject and thus give it shape – what we then finally call a body without organs, a machinic ensemble that is machinic in all its parts. There is no fusion of areas (subject/object, knowledge/thing, etc.); rather, as in quantum physics, there are superpositions in which, for example, two waves retain their identity while generating a third wave, which neither represents a synthesis of the two preceding waves nor their destruction, but rather indicates a non-commutative identity in the sense of François Laruelle. The concept of idempotence, a term from computer science and mathematics, denotes a function that, composed with itself or supplemented by further applications of itself, remains unchanged, so that the generative matrix persists as a non-commutative identity through all variations without ever requiring transcendence. According to Neyrat, idempotence is the characteristic feature of the holoject, which is “both, as well as, as well as …”, whereby, with regard to idempotence, the function of the “and” is primarily in focus, i.e. the insistence of subjunctive syntheses, and this leads us to an open technical structure in which the technical object, as an “in-between”, always appears with a certain delay and as an inexhaustible reserve of the technical medium itself. 
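The computer-science sense of idempotence invoked here can be made concrete with a minimal sketch (the function name and example strings are illustrative, not taken from Neyrat): an idempotent operation is one whose repeated application changes nothing beyond the first, i.e. f(f(x)) = f(x).

```python
def normalize(text: str) -> str:
    """Collapse whitespace and lowercase -- an idempotent operation."""
    return " ".join(text.lower().split())

once = normalize("  The  Holoject  ")
twice = normalize(once)
# Applying the function to its own output changes nothing: f(f(x)) == f(x).
assert once == twice == "the holoject"
```

This is the structural point the text borrows: after the first application the result persists unchanged through all further applications, a fixed "matrix" that requires no step outside itself.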
In this context, McLuhan’s formula “the medium is the message” neither postulates an identity of terms, nor is the message degraded to a mere effect of technical structures; rather, something resounds in the “is” that returns in the medium as difference, virulence or disruption without ever being shut down. (Cf. Lenger 2013) The message of the medium consists in the fact that difference only joins a medium “together” in order to return in it as disparation and to repeat itself as difference, thus simultaneously undermining its previous technical modes and modifications. At this point Jean-Luc Nancy speaks of an eco-technics of intersections, twists and tensions, a technique that is alien to the principle of coordination and control, and he describes this pure juxtaposition, this unstable assembly without any sense, as a struction. (Cf. Hörl 2011: 61)
With regard to the cybernetic situation, Alexander Galloway has defined the black box as an apparatus of which primarily the inputs and outputs are known or visible, its relation to the outside being established by various interfaces. While Marx’s critique of commodity fetishism was still about deciphering the mystical shell in order to penetrate to the rational core, today’s post-industrial technologies, which constantly produce new goods such as information, are open and visible in a shell that functions purely via interfaces, while at the same time the core remains invisible. (ibid.: 269) The interactive interfaces occupy the surfaces of the black boxes and usually allow only selective passageways from the visible outside to the opaque inside. Black boxes function as nodes integrated into networks, whose external connectivity is subject to an architecture and management that remain largely invisible. According to Vilém Flusser, the camera can be regarded as exemplary for most devices and their function: its operator controls the camera to a certain extent by controlling the interface, i.e. by means of input and output selections, but the camera controls the operator precisely because of the opacity of the inside of the black box. For Simondon, today’s digital technologies with their visually attractive, “black-boxed” interfaces would prove highly problematic. These technologies usually derive their popularity from a suggestive aesthetics of the surface; they do not attract users because they offer them, say, a liberating possibility of indetermination of the technology, of flexible couplings between the machines and with the human, such as Simondon would consider worthy of consideration. Simondon insists that the fundamental movement of technological development consists not in an increase in automation, but in the emergence and evolution of those open machines that are susceptible to social regulation. 
With the black boxes, on the other hand, we are dealing with technological objects described as ensembles of readable rational functions with regard to their input-output relations, which are to run as smoothly as possible, whereby on the one hand their core remains unreadable, and on the other hand their material construction figures in the discourse at best as a rather negligible factor. Simondon challenges us to look inside the black boxes.
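Galloway’s point about readable input-output relations and an unreadable core can be sketched schematically (class name, state and update rule are purely illustrative assumptions, not drawn from Galloway): the user observes only the interface, never the rule that produces the outputs.

```python
class BlackBox:
    """Only the interface (inputs and outputs) is visible; the core stays opaque."""

    def __init__(self):
        # The internal state and transformation rule are hidden from the user,
        # who observes nothing but input-output pairs at the interface.
        self.__state = 17  # name-mangled: not part of the public surface

    def query(self, value: int) -> int:
        # The observer sees that some output follows each input, but not why.
        self.__state = (self.__state * 31 + value) % 97
        return self.__state

box = BlackBox()
outputs = [box.query(x) for x in (1, 2, 3)]
# From outside, only the input/output series is readable, never the rule;
# the "ensemble of readable rational functions" is exactly this series.
```

The design choice mirrors the argument: the selective passageway is the single `query` method, while the constructive function behind it remains withdrawn from the discourse on the object.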
In addition, the problem of connectivity must be taken into account with regard to machines that do not simply emit but transmit, machines that have a plurality of processes and effects of their own – a matter of the highest economic relevance when these machines produce multiple machine functions and effects in and with their complexes, contrary to a one-dimensional chain of effects; indeed, these functions can even blow previous machines apart and thus produce new conjunctions. “The spheres of production and energy technology, transport, information and human technology indicate vague field definitions of machines in which the machine-environmental is already inscribed,” writes Hans-Dieter Bahr (Bahr 1983: 277), and in principle the machinic ensembles and processes can thus be described as transmitting information – information into which natural, economic and social structures and processes also enter, including their postponements, complexifications and changes of layer – whereby it is by no means a matter only of communications, but also of absorptions and filterings of the information itself, of the manipulation of data qua algorithms; and thus the respective relations and programmings/functionalizations would also have to be decoded inside the technical objects themselves, which, however, the hegemonic discourses on technology virtually prevent. In contrast to the darkening of the interior of the black boxes, Simondon pleads for a discourse that focuses on the perfect transparency of the machines. The aim here is to recognize potentials and relations that are sometimes already condensed in the machines and that then concretize themselves qua a functional overdetermination of the technical objects. For Simondon, the machines represent something like mediators between nature and man, which we have to grasp, among other things, in the discourses on media. 
The machine, as Hans-Dieter Bahr explained in his paper Über den Umgang mit Maschinen, could therefore be described less as the concept of an object “machine” than as a discursive formation. (ibid.: 277) Every (digital) machine is functionalized by programming, whereby it quickly becomes apparent, however, that the mere description and maintenance of the constructive functions does not guarantee that a machine will “function”; rather, the manifold dysfunctionalities of the machines must be taken into account – accidents, crashes, crises, etc. – which can cross the functioning system of input and output relations at any time and which are themselves effects of the “machine”. (It may well happen that a deceleration of machine speed is cost-saving for an economy as a whole – think, for example, of the (external) climate costs that do not arise – although the deceleration increases costs for the individual capital; and a machine may well become obsolete through competition between companies, i.e. from an economic point of view, although it is still fully functional in material terms, a constellation that Marx called moral wear and tear.) The in-between of the machines, or the machine transmissions, massively blocks a teleological view: the outputs of complex machines today are less than ever simply commodities – mostly they are already further machine inputs – but rather generate complexes of effects, including unintended side effects, with which the machines themselves mutate into the labyrinthine and therefore constantly need new programming and functionalities for orientation and control in order to maintain their input selections and outputs, because the machines are supposed to function through the supply of programs, materials and information and through the control of the input-output relations.
Possible outputs of the machines can be use values, but also dysfunctions that disturb continuous operation – yet most of these outputs are inputs into other machines. Machines thus emit energy and information streams that are cut or interrupted by other machines, while the source machines of the emitted streams have themselves already made cuts into, or withdrawals from, other streams, which in turn belong to other source machines. Each emission of a stream is thus an incision into another emission, and so on and so forth – at least this is how Deleuze/Guattari see it in Anti-Oedipus. (Deleuze/Guattari 1974: 11f.) At the same time, a double division emerges with the machinic incisions, whereby the concept of the incision does not emerge as meaning from an inside, in order then to be translated or transported into the inside of another; rather, in the communication of the cut something is displayed that is already “outside” as an outside, for example a network of machine series fleeing in all directions. (Lenger 2013) Each communication or translation takes place over an inexpressive incision into which the net divides. This division remains inexpressive in the message, but only because an open space is opened that allows everything to be communicated and expressed. And these divisions take place today via interfaces. Interfaces are usually described as significant surfaces. An extension of the concept takes place when interfaces are conceived as transitions or passages, described as thresholds, doors or windows, or, furthermore, understood in the sense of a flexibilization of input selections as fields of choice, whereby we can then speak of an intraface that identifies itself as an indefinite zone of translations between inside and outside. (Cf. 
Galloway 2012) The intraface opens the machine structures in an indefinite way onto associated milieus, which we have already encountered with open machines, or onto processes into which several intrafaces are always integrated, as effects of translations that work or do not work – whereby even this binary distinction remains questionable when one considers that machine transmissions simply cannot do without side effects and disturbances.
Now the cybernetic hypothesis is characterized precisely by the fact that it defines the technological object or the technical system by the sum of its inputs and outputs, whereby black boxes (computers, data objects, interfaces, codes) permanently have to eliminate dysfunctional inputs. Among the unfavorable inputs are climatic conditions, incomplete classifications, influences of other machines, faulty programs and wear and tear, and it is up to the cybernetic machines to absorb these structures and correct them according to their own criteria; these transformations in turn affect the outputs. When machine systems select and transform different types of input, this means that a multitude of economic, social, natural, cultural and legal functions belong to their inputs as well as to their outputs. (Bahr 1983: 281) Here the disciplining function of the feedback mode of cybernetic control loops becomes quite evident: the attempt to feed outputs back to inputs in such a way that dysfunctional inputs can be faded out or eliminated in the future, or at least more functional selections of inputs can take place than before. Cybernetics is thus characterized not only by automation, but above all by the mechanism of input selections. If the human element is selected out, one speaks of the automaton. This of course contradicts a posthuman situation as Gilbert Simondon had imagined it: when technical objects individualize themselves, they always also stand in external resonance, whereby the resonances between the technical individual and the associated technological milieu insist, creating a recursive causality in between. 
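The feedback mode described here – outputs fed back so that dysfunctional inputs are faded out of future selections – can be sketched minimally (function name, setpoint and tolerance are illustrative assumptions, not from Bahr or the cybernetic literature):

```python
def cybernetic_loop(inputs, setpoint=10.0, tolerance=5.0):
    """Feed each output back to tighten the input selection:
    readings that drift too far from the fed-back reference are discarded."""
    accepted, reference = [], setpoint
    for reading in inputs:
        # Input selection: drop dysfunctional inputs relative to the feedback value.
        if abs(reading - reference) <= tolerance:
            accepted.append(reading)
            # Feedback: the output (running mean) becomes the new reference.
            reference = sum(accepted) / len(accepted)
    return accepted

# Outliers ("climatic conditions, faulty programs, wear and tear") are
# selected out; only inputs near the feedback reference pass.
print(cybernetic_loop([9.5, 10.2, 42.0, 10.8, -3.0, 9.9]))
# -> [9.5, 10.2, 10.8, 9.9]
```

The disciplining effect is visible in the code: the loop never questions its own criterion, it only narrows what may count as input – exactly the automatism of input selection the text contrasts with Simondon’s open machines.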
Cybernetics, however, wants to subject the in-between entirely to its automatism, to its input selections, whereby the identity of living beings and machine is thought purely from the point of view of the automaton, while Simondon conceives this asymptotic analogy between the human and the machine from the perspective of machines that have always oriented themselves towards open spaces and associated milieus, which in turn corresponds to a certain affirmation of non-selective inputs and a variety of stratagems that continue themselves as incisions, divisions and crossings of the machine milieus. Machines as media configure intermediate worlds insofar as they indicate a mediation without extremes or poles, since the poles (inputs and outputs) often prove to be further stratagems. Technological objects today are usually embedded in digital networks, with the associated architecture of protocols regulating their exchange of information among themselves, which thus proliferates over a complex topology of densifications and dispersions – and even from this Simondon would probably still gain a cultural force. Such a culture does not merely use the machines, but recognizes that cultural dignity lies precisely in the acknowledgment of the pure functioning of the technical objects, whereby the human being enters into a dialogue with the technical ensembles, a dialogue that can lead to a true transindividuality. We speak here generally of technicity. 
If the input and output selections are considered on the basis of their intersecting contingencies, then we are no longer dealing with automatic machines but with genuinely open machines – and concretization then means appreciating the contingency of the functions and the interdependence of the elements in order to do justice to their inner resonance, which makes them probabilistic machines that cannot be measured against the ideal of precision, but display different degrees of precision by expanding their range of use into new areas, until they occupy, or at least affect, all fields of the social, cultural, economic and technological, as in the case of computer technology, though in a usurping manner. It is the process of disparation between two realities – in Deleuze’s sense the disparation between the virtual and the actual – that ultimately activates information differently from the digital and sets in motion a process of individuation that comes from the future. Information is located less on the homogeneous level of a single reality than on at least two or more disparate levels, e.g. a 3D topology that knots our posthuman reality; it is a fabrication of reality that folds the past and the future into the present, as an individuation of reality through disparation that is information in itself. If individuation encompasses the disparation of the virtual and the actual, then information is always already there, already the present of a future present. What is called past or present is therefore mainly the disparation of an immanent source of information, which is always in the process of dissolution. For Simondon, the idea of the capacity or potential of a technical object is closely linked to his theory of individuation. The individual object is never given in advance; it must be produced, it must coagulate, it must gain existence in an ongoing process. 
The pre-individual is not a stage that lacks identity, it is not an undifferentiated chaos, but rather a condition that is more than a unit or an identity, namely a system of highest potentiality or full potentials, an excess or a supersaturation, a system that exists independently of thinking.
Digital networks today not only encompass the globe that they themselves generate, but also penetrate into the social microstructures of the capitalist economy, whose human agents are in turn subjected to permanent addressability, online presence and informational control. (Lenger 2013) Being “online” today condenses into a hegemonic form of life; constantly mobilizable availability is part of a flexible normalization that affirms users in toto with the practices of everyday wellness, cosmetics and fitness programs, until, in the course of their permanent recursion with the machines, they finally incorporate the processes of normalization completely. In the postscript on the societies of control, Deleuze described human agents as “dividuals” – mostly a-physical entities, infinitely divisible and condensable into data representations – which, precisely because of the effects of a-human technologies of control, at some point act like computer-based systems. At present we can assume a homology between post-Fordist management methods, which propagate non-hierarchical networks, self-organisation, flexibility and innovation in heroic litanies, and the neurosciences, which describe the brain as a decentralised network of neuronal aggregates and emphasise neurological plasticity (Catherine Malabou) as the basis of cognitive flexibility and adaptation. According to Catherine Malabou, neuronal and social functions influence each other until it is no longer possible to distinguish between them. At the very least there is the possibility that the human species, with the rapid translation of its own material history into data streams, networked connectivity, artificial intelligence and satellite monitoring, tends to become a decal of technology. If events – mobile apps, technological devices, economic crises, digital money, drone wars, etc. 
– unfold at the speed of light, then the reference systems of traditional techno-discourses are definitively destabilized, and their definitions and hypotheses increasingly fail as useful indicators of what the future of hyper-accelerated capitalism could still bring. The blurring of clearly defined boundaries between bodies and machines and the interpenetration of human perception and algorithmic code lead to the injection of a fundamental technological drift into the social, cultural and economic, while the economy and its machinery continue to determine the technological. Implemented in social reality, the important signifiers of technological acceleration today include concepts such as “big data”, “distant reading” and “augmented reality”; they connect words still bound to gravity, and capital as a power, to the weightless space of the regimes of computation. There will be more migrations into this weightless space in the future – of thought into mobile technologies, for example; we will at the same time have to deal with increasing volatility in the field of the digital financial economy, triggered by trading algorithms based on neural networks and genetic programming; we will be completely immersed in the relational networks of social media; and, last but not least, we will be confronted with a completely distributed brain that modulates experiments in neurotechnology. Nothing remains stable; everything is in motion.
Algorithms need to be discussed in the context of this new mode of automation. Usually the algorithm is defined as an instruction for solving a problem in a sequence of finite, well-defined steps – sets of ordered operations on data and computable structures, implemented in computer programs. As such, the algorithm is an abstraction; its existence is integrated into the particular programming language of a particular machine architecture, which in turn consists of hardware, data, bodies and decisions. Currently existing algorithms process ever larger amounts of data and thus a growing entropy of data streams (big data); they generate far more than just instructions to be executed, namely potentially infinite amounts of data and information, which in turn interfere with other algorithms in order to re-program the various algorithmic processes. From an economic perspective, algorithms are a form of fixed capital in which social knowledge (extracted from the work of mathematicians and programmers, but also from user activities) is objectified, whereby this fixed capital is not valuable in itself, but only insofar as it is drawn into monetary capitalization, which it in turn can drive and force further. In any case, algorithms are not to be understood as mere tools; one should finally understand that they actively intervene in the analysis and processing of data streams in order to translate them into economically relevant information, to valorize them, and to generate and successfully conclude the respective orders on the financial markets self-referentially. 
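The textbook definition invoked above – a finite sequence of well-defined steps operating on data – can be made concrete with a minimal, purely illustrative sketch; Euclid's algorithm is chosen here as a canonical example and is of course not one of the trading algorithms the text discusses:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of well-defined steps.

    Each iteration replaces the pair (a, b) by (b, a mod b); the
    second component strictly decreases toward zero, so the ordered
    steps are guaranteed to terminate with the greatest common divisor.
    """
    while b != 0:
        a, b = b, a % b
    return a
```

Every property of the definition is visible here: the steps are ordered, each is unambiguous, and termination is provable – which is precisely what makes the algorithm an abstraction independent of any particular machine that implements it.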
This means that the far greater share of financial transactions in high-frequency trading (HFT) today runs via pure machine-machine communication, which human actants are no longer able to observe, because the data and information flows circulate at a-human speeds through invisible apparatuses and thereby liquefy the distinction between machine, body and image. (Cf. Wilkins/Dragos 2013) Although the composition of human and a-human entities varies across the various HFT systems, in extreme cases some financial companies eliminate almost any human intervention in the automated transactions, so that the data read by the machines flow self-referentially into and back out of the algorithms controlling the processes, and trading decisions can be largely automated. Human intervention, by contrast, complicates even those financial processes in which specific errors and problems have arisen. Some algorithms are already being physically implemented in silicon chips – a fusion of hardware and software. In HFT systems, the contemporary financial economy is thus largely invisibly shaped by algorithms – certain programs, for example, permanently scan the financial markets to see whether the indicators fixed by algorithms reach certain levels, which then become effective as buy or sell signals. Current versions include the volume-weighted average price (VWAP) algorithms, which, in conjunction with econometric methods, generate complex randomness functions to optimize the size and execution times of monetary transactions in the context of global trading volumes. (ibid.) Other types of algorithms try to identify and anticipate such transactions, and there are non-adaptive, low-latency algorithms that "process" both the differentials of transmission rates in global financial networks and the correlating material transformations that enable those informational relations. 
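The VWAP benchmark itself reduces to a simple formula – the volume-weighted mean of executed prices. A minimal sketch follows; the function names and the proportional order-slicing heuristic are illustrative assumptions, not the proprietary implementations the text refers to:

```python
def vwap(trades):
    """Volume-weighted average price: sum(price * volume) / sum(volume)."""
    total_volume = sum(v for _, v in trades)
    if total_volume == 0:
        raise ValueError("no volume traded")
    return sum(p * v for p, v in trades) / total_volume


def slice_order(total_shares, volume_profile):
    """Split a parent order proportionally to an expected intraday
    volume profile, so that the order's own average execution price
    tracks the market's VWAP instead of moving the price itself."""
    profile_sum = sum(volume_profile)
    return [round(total_shares * v / profile_sum) for v in volume_profile]
```

For example, 200 shares traded at 100.0 and 300 at 101.0 give a VWAP of 100.6; an order of 1,000 shares sliced against a profile of [1, 2, 2] is split into [200, 400, 400]. The randomized size and timing functions mentioned above would perturb exactly these slices to avoid detection by the anticipating algorithms.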
Genetic algorithms are used to optimize the possible combinations of price fluctuations of financial derivatives and instruments and to ensure the optimal fine-tuning of each parameter within a financial system. (ibid.) The implementation of algorithmic systems in computerized financial economics represents a qualitatively new phase of the real subsumption of machinery under capital, indicating the transition from cybernetics to contemporary scientific technicity, the so-called "nano-bio-info-cognitive" revolution, which sits on distributed networks and supposedly friction-free systems (superconductors, ubiquitous computing). (Cf. Srnicek/Williams 2013) (Real subsumption under capital includes that every aspect of the production process, technology, markets, labor
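The genetic fine-tuning of parameters mentioned above can be sketched generically: a population of candidate parameter vectors is evolved by selection, crossover and mutation against a fitness function. This is a minimal, self-contained sketch of the general technique, not any actual trading system; all names and the toy fitness function in the usage note are assumptions:

```python
import random


def genetic_optimize(fitness, dim, pop_size=40, generations=60,
                     mutation_scale=0.3, seed=0):
    """Minimal genetic algorithm over real-valued parameter vectors.

    Selection keeps the fitter half of the population unchanged
    (elitism); offspring arise by uniform crossover of two surviving
    parents plus Gaussian mutation. Lower fitness is better.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x if rng.random() < 0.5 else y)  # uniform crossover
                     + rng.gauss(0, mutation_scale)    # Gaussian mutation
                     for x, y in zip(a, b)]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

In a financial setting the fitness function would score, say, a parameterized pricing or execution model against historical data; here any callable mapping a parameter vector to a cost works, e.g. the squared distance to a hypothetical target vector.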
Let us now come to the current machine ensembles and their environments, the digital networks and their complex ecologies of the material and the economic, in which high-frequency trading (HFT) is integrated. Digital technologies have long since permeated the entire financial system – with HFT, the fluid planetary movement of financial capital, driven toward the violence of pure market abstraction and toward the substitution of material experience by the various models of computer simulation, easily detaches itself from production and the orders of consumption in order to continue in a closed, self-referential, semiotic system that permanently forces the calibration and recalibration of machine-machine relations. The process of decimalization (the pricing of assets in decimals rather than fractions), which has been self-accelerating on the financial markets since about 2000 and has further reduced the spread between purchase and sale prices, reflects and fuels the need to move ever larger transaction volumes ever faster on the financial markets, so that the ever smaller spreads can still be compensated at all. Traders hold the positions of the respective deals only for extremely short periods of time and realise only small spreads, so that high profits result solely from the quantity and speed of transactions. With so-called direct trading, which above all allows large institutional investors to bypass all mediators (including the stock exchange) between themselves and the respective trading partner, and with the existence of almost completely automated infrastructures, it is becoming increasingly urgent for financial companies to access the latest technological innovations in order to manage and, if at all possible, control them in the sense of an accelerative dynamic. 
Thus, in current HFT, digital automation infiltrates almost every aspect of the trading process, from analysis to execution to back-end processes, with all components controlled by algorithms. An HFT system must perform the fine-tuning of all programming and memory capacities, the manipulation of individual data points and packets, the acquisition of databases and the selection of inputs, etc. There is therefore a clear trend towards hegemonic automation on the financial markets. (Marx had at least rudimentarily described automation in the Grundrisse as a process of absorption of the general productive forces – part of the social brain – into the machine or capital fixe, which also includes the knowledge and technological abilities of the workers (Marx 1974: 603), who now follow the logic of capital rather than still being expressions of social labor.) If one visualizes the history of the relation between capital and technology, it seems quite obvious that automation has moved away from the thermomechanical model of classical industrial capitalism and has integrated itself into the electronic-computational networks of contemporary capitalism. Digital automation today processes the social nervous system and the social brain in detail; it encompasses the potentials of the virtual, the simulative and the abstract, of feedback and autonomous processes; it unfolds in networks and their electronic and nervous connections, in which the operator/user functions as a quasi-automatic relay of the continuously flowing streams of information.
remixing of the edges of humans, animals, plants and inanimate objects – all