Luciana Parisi and the problem of the predictability of unpredictability


4 Feb 2020

In her book Contagious Architecture, Luciana Parisi calls on us to distance ourselves from two metacomputational definitions that are still influential today, according to which algorithms are determined either a) as finite quantities or b) as evolutionary interactive models, the latter based on sensors, processors and feedback processes. Parisi believes that as long as these two concepts hold sway, the understanding of algorithms remains unsatisfactory, insofar as they do not take into account the problem of calculating infinite series of probabilities – a problem that concerns probability, or the calculation of unpredictability. (Cf. Parisi 2013)

In contrast to second-order cybernetics, which places the algorithm in the context of the reflexive-systemic, circuit-producing capacity and the reproduction of the system in relation to its environment, in order to think the emergence of autopoietic media systems, Parisi defines algorithms as “real objects”, as capturing realities, which, however, by no means reflect the real or constitute themselves through interaction with it. On the contrary, algorithms have always been infected/determined by the real; indeed, they are immanently attached to the real, or sewn to it in a very specific way. At this point Parisi places Whitehead’s concept of capture (prehension) at the center of her considerations, a concept that takes into account both the mode of physical sensation and that of conceptual thinking. One might even speak here of an immanent cloning of the never attainable real (Laruelle), a process characterized by the impossibility of reducing the capture of the real to the reflection of the real. With the term “capture” Parisi clearly distinguishes herself from certain theories of perception and cognition, which always demand a series of ontological statements about the real. Something is by no means real only if it can be perceived by humans sensorimotorically or in the form of qualities (colours, sounds etc.). Whitehead assumes, rather, that everything that people perceive – or do not perceive – has its own history and capacity for transformation, which shows itself as the gathering of data from past, present and future. (Whitehead 1979: 348f.) The real itself is not dependent on perception, but neither is it dependent on a theory specified by, say, Enlightenment reason, according to which things can only be predicated as true if they correspond to the principles of logic, mathematics or factual evidence. For Whitehead, capture clearly precedes perception and cognition and must therefore be regarded as subpersonal. (Ibid.)
And the concept of capture has nothing to do with projection or reflection; as a specific treatment of events in which relations arise in the process, it demands sui generis the break, innovation and speculation, the latter understood as a tendency that transforms the captured from what it is into what it could be.

The concept of capture also encompasses media, which Parisi consistently defines as capture machines, inasmuch as they are not only physical machines for recording and cloning data, but also for capturing, processing and producing affects, energies and information – and thus tend to transcend fixedly programmed media structures towards something for which certain programs may not have been intended. One might think here of sampling machines that incessantly transform and decontextualize media material. Parisi also attempts to think of media as actual worlds, insofar as they are not attributes of something but rather real apparatuses.1

Parisi also follows Whitehead in his assumption that one cannot bind oneself completely to the (philosophical) formalism of sufficient reason, because any attempt, whether in the course of naive realism or of exaggerated social constructivism, to achieve a correspondence between mental cognitions and actual entities underestimates the speculative power of reason. Whitehead’s speculative reason is interested not in Leibniz’s sufficient reason but in the final reason. For Whitehead, finality involves the creation of future, previously unproven ideas rather than simply relying on existing data and facts. Speculative reason implies a rule-based system that includes novelty as a decisive criterion in its judgments, owing to the fact that we do not have access to all facts but are nevertheless bound to an (open) whole that is always in excess. Whitehead’s concept of the final reason should by no means be understood as teleological, however, but as the affirmation of an open and incomplete universe, which cannot be represented even by the highest efforts of speculative reason. Whitehead’s concept thus moves away from formal and practical reasons, and also from the concept of empirical truth, and addresses the problem of finality – a matter not of critical but of instrumental reasoning. Real individuals, composed of events, constantly recreate themselves in the process of development and thus pursue a purposeful goal. For Whitehead, speculative reason at this point implies an asymmetrical connection of the efficient reason with the final reason and must thus be understood as a machine of emphasis, so to speak, one that always celebrates the new.

Diagrams have long since evolved from their early deductive phases of calculation to inductive modes of computation. In her essay “Automated Architecture”, Parisi notes a shift from the early models of computational design, which were based on deductive forms of digital architecture, to more inductive and at the same time material forms, in which the physical properties of matter itself appear as the motors of simulation – local behaviours of the material from which, in turn, complex structures emerge. (Cf. Parisi 2015) Rather than insisting on an a priori and a passive acceptance of established truths, induction apparently allows the affirmation of the adaptation and processing of a material driven by data, which may result in the shaping of a new design. This form of “computational design” deliberately adopts an anti-representational stance, in that it is more concerned with action, operation and processing, within which computation is primarily a pragmatic effect. Instead of merely designing matter, one now follows the biophysical movement of matter, embedding it in practical functional terms, in a constant feedback loop, in order to stabilize the respective simulations. Thus the infiltration of the dynamics and impact of matter, its biophysical temporalities, into computational design implies a tremendous multiplication of algorithmic evolution and of the masses of data that push for accelerated, computer-based processing. (Ibid.: 150) The symbolic logic of deductive procedures is not simply replaced; rather, certain abstraction and quantification functions of deductive computation are retained. (Ibid.) For Parisi, however, this abstract materialism leads algorithmization back to physical causes in a still idealistic way and thus neglects precisely the materiality of the calculation itself.

According to Parisi, a different computational design thinking would have to proceed from a new axiom, which says that abstract data that cannot be grasped by perception, or the data abstraction controlled by algorithmic agents, determine automated interaction in networked, distributed and parallel systems. (Ibid.: 152) As a moment of accelerated automation, induction cannot simply set itself the goal of mechanizing or instrumentalizing reason and ground in order thereby to provide the formal conditions from which new truths can be established; rather, it must allow the incompatibility of the physical conditions of matter with the formal and constitutive reasons, rules and patterns that emerge from the digital automation of space and time to become identifiable.

Parisi is thus less interested in a new materialism than in the computerized production of physically induced models. She is not interested in the simulation of material behaviour itself, which she still regards as a meta-biological form of calculation that scans the manifold properties of matter for data analysis; rather, she clearly favours algorithmic processes that problematise unpredictability itself, and pleads for the analysis of open feedback loops that enter the system of computational design as an ongoing process. Ultimately, however, she is also less concerned with design than with a type of computational reasoning itself, which she uses to better understand algorithmically automated processes and their axiomatic ways of thinking and rules. Finally, Parisi asks what algorithmic reason actually contains and how it functions. Parisi thus definitely has problems with the inductive form of a materialistic idealism, the direct, seamless Parmenidean fusion of thought and matter.

Finally, the discrepancy between what algorithms do and our perception of these processes shows that these processes are not created for us, not for the subject.

Furthermore, Parisi is concerned with the problem of incompleteness in the context of Gödel’s theorem. She argues that mathematics is not complete, so there can be no finite, closed set of axioms or rules that could provide the perfect computable algorithm encompassing all aspects of physical processes or data. There is always an outside. In this context Parisi refers to the mathematician and computer scientist Gregory Chaitin and his treatment of the problem of calculating the unpredictable, which he relates to the halting probability of the Turing machine – a probability that seems to be computable despite infinitely long series. (Ibid.: 158) Chaitin calls this probability Omega: the limit value of a sequence of numbers which, in turn, is supposed to be convergent, increasing and calculable. Nevertheless, the limit of the calculation remains subject to chance; one can calculate probabilities for the point in time at which the computer, after being fed a sequence of random bits, halts to present a solution, but in principle Omega remains unpredictable, because each of the infinitely many digits of Omega can be either one or zero. No logical process can thus synthesize chance a priori, which means that it cannot be integrated into deductive logic or into inductive processes, the latter of which refer to physical causes that are themselves incalculable. (Ibid.: 160)
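For reference, the standard mathematical form of Chaitin’s construction (not Parisi’s own notation) makes the paradox explicit: for a fixed universal prefix-free Turing machine U, Omega sums the weight of every halting program, and it is the limit of an increasing, computable sequence of rational approximations – exactly the “convergent, increasing and calculable” sequence mentioned above – while the limit itself is not computable and its binary digits are algorithmically random:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},
\qquad
\Omega_U \;=\; \lim_{t \to \infty} \Omega_t,
\quad
\Omega_t \;=\; \sum_{\substack{|p| \le t \\ U(p)\ \text{halts within } t \text{ steps}}} 2^{-|p|}
```

Each approximation $\Omega_t$ can be computed by running all short programs for a bounded number of steps, yet no algorithm can decide how close any $\Omega_t$ is to the limit – which is the precise sense in which the calculation of the probability remains unpredictable.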

Reality is always more complex than any mathematical algorithm that may be invented to capture it or describe it in representational terms. Parisi follows Whitehead in saying that we need a new speculative form of calculation to deal with finiteness and limits, one that at the same time raises the question of the unpredictability of the parts that disrupt the reproductive mechanism of the whole. If we transfer this problem to medial machines, one question is what it means when unpredictable or non-computable real numbers enter medial systems. Complex mathematical algorithms involving all possible variables indicate the problem: take, for example, the Monte Carlo algorithm, which was initially used to solve problems in nuclear physics, was later applied to stock markets, and which scientists now use to study non-linear technological diffusion processes. The Monte Carlo method is a widely used algorithm for computing the ground and excited states of many-particle systems; for states without nodes the algorithm is numerically exact, while in the presence of nodes approximations must be introduced, such as the “fixed-node approximation”.
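The basic gesture of the Monte Carlo method – replacing an exact calculation with repeated random sampling whose average converges on the sought value – can be shown in a minimal sketch. The example below (estimating pi from random points in the unit square) is a deliberately simple stand-in for the many-particle applications mentioned above, not an example Parisi herself gives:

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi by drawing random points in the unit square and
    counting the fraction that land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction inside approximates (area of quarter circle) = pi / 4.
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(monte_carlo_pi(100_000))
```

The estimate fluctuates with every new sequence of random bits and only converges statistically, never deductively – a small-scale illustration of why the limit of such a calculation remains bound to chance.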

Under certain circumstances, the processing of the complexity of data and structures will lead to a point at which automatic algorithms will finally be able to process their final reasons themselves in the computational processing of infinite data sets. A post-intentional future will emerge precisely when the machines “consciously” organize themselves beyond human intentions or affective relations in order to express a real world that is seemingly endless. Thus algorithms (real objects that are statements) would not simply belong to a machine logic of the kind that corresponds to all previous media systems; they would point beyond the media as such. The cybernetisation of the media has in part already led to recording machines of the inarticulable and unrepresentable, which play with moments of subjectivation, recording, storage and utilisation of infinite information. The current media machines continuously produce series of floating sensations, affects and opinions, which are displayed, stored and also traded on Facebook as likes and dislikes. In this way the media machines constantly create their own immanent clones of reality, by homogenizing the differences floating in the affects, or by merely simulating a homogenization in order to intensify the game of (controlled) differences. The regulating and at the same time inhuman moment here is a filter-bubble algorithm that is always present; it moderates the communication flows, so to speak, without even being noticed, and it regulates and garlands the behaviourist communication education on Facebook with “likes”.

A post-cybernetic logic of computation now has to show that coincidences, accidents and malfunctions, too, are necessary components of interactive processing (with possibly conflicting data), which can potentially lead to crashes in the system. Parisi speaks in this context of the programming of the crisis, which Brian Massumi, to whom she refers here, defines as a pre-emptive power or an affective anticipation of the catastrophe. Parisi does not, however, aim at a mere anti-logic of chance/accident, because the factor of chance/accident has long since become a functional element of the survival of dynamic, biopolitical control systems themselves. As capturing realities, algorithms – at least Parisi believes – offer yet another concept of a machinic, numerical aesthetics outside the ontologies of chance/accident. This aesthetics must then be defined in terms of a specific automatic way of thinking, which Parisi in turn discusses with reference to Whitehead’s concept of speculative reason. Algorithmic entities are understood here as abstract and actual, but not virtual. As actualities, they not only possess relations to other actual entities, but also process them in different ways, whereby contingency is not to be thought of as outside, but rather as intrinsic to algorithmic processing. Here Parisi approaches Greg Lynn’s conception of objects and “blobs”. (Lynn 1998) Lynn argues that one should generally not start from closed entities that have exact coordinates in space, but rather from surfaces defined as vectors and transformations. Objectiles are spatiotemporal events. One can see that Parisi’s project of redesigning digital algorithms by means of the concept of capture wants to move towards an irreflective, automatic and at the same time open thinking. However, not everything has to amount to a completely open series of processes or absolute contingencies.
With this line of argument, Parisi repeatedly refers to objects of which she says that there are always more than one and less than many – always more than a closed unit and less than infinite series or completely open processes. However, it is not entirely clear why she always speaks of objects and not of events in this context, as is usually the case in Whitehead’s philosophy.

1 It must be assumed here that a general matrix of recording machines is formed, machines that stand in a differential relationship to monetary capital. The relationship of technical and socio-economic machines is a differential one and is at the same time determined, in the last instance, by the latter.

Parisi, Luciana (2013): Contagious Architecture. Computation, Aesthetics, and Space. Cambridge, MA.

– (2015): Automated Architecture. In: #Acceleration2#. Eds.: Avanessian, Armen/Mackay, Robin. Berlin.

Translated by DeepL.

Photo: Bernhard Weber

