The Algorithmic Unconscious: Psychoanalyzing Artificial Intelligence

I recently came across an article, written just last year, that caught my attention. It draws a parallel between AI and psychoanalysis, two fields that until now seemed completely divergent. It argues that we can psychoanalyze an AI. But how would that make sense? Machine Behavior, an emerging field in “psycho-robotics”, argues that it does.

“We need to study AI systems not merely as engineering artifacts, but as a class of social actors with particular behavioral patterns and ecology” (Possati 2020).

It seems that for the psychoanalysts of Artificial Intelligence, the unconscious is structured like a machine, a piece of equipment, rather than a theater or something as general as a language. Possati draws on Latour’s anthropology of science to offer an alternative interpretation of Lacanian psychoanalysis, one that allows for a relationship of transference and a whole new type of (unconscious) identification between the human being and an AI. Altogether, Possati draws on three knowledge-systems: Machine Behavior, psychoanalysis and anthropology. He decomposes AI into three distinct parts: logic, machinery and the desire to identify with another human being. His account of the artificial unconscious will shed new light on the concepts of miscomputation and information.

So what is an AI unconscious? Well, it’s an algorithmic unconscious: embodied logic. First and foremost, Possati wants to draw us away from the specifically technical questions associated with the engineering and programming of AI. Instead he wants to survey the social environment in which an AI gets deployed and how it interacts with humans. Simplified: the methodology assigns real human agency to an AI and makes sense of its behavior in the same way that it explains human action. Of course, the problem here is that human action and agency are questions that operate at a level at least as complex as anything like an AI unconscious. This is where we need psychoanalysis.

Possati refers to a term that may sound familiar by now: the trial-and-error algorithm. The way Possati describes this algorithm sounds very similar to reinforcement learning, a term used throughout AI research today, together with a more familiar word, creativity, which we discussed within the context of the creativity code. Possati argues that RL-type algorithms could constitute an AI unconscious that would be just as creative as the human unconscious, or “identical” to it, so to speak.
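To make the trial-and-error idea concrete, here is a minimal sketch of reinforcement learning in its simplest form: an epsilon-greedy agent facing a multi-armed bandit. The scenario, payoff values and names are my own illustration, not Possati’s formalism; the point is only that the agent learns which action pays off purely by acting, failing and revising its estimates.

```python
import random

# Illustrative sketch only, not Possati's own formalism: an epsilon-greedy
# agent learns by trial and error which of three "arms" pays off best.

TRUE_PAYOFFS = [0.2, 0.5, 0.8]   # hidden reward probabilities, unknown to the agent
EPSILON = 0.1                    # exploration rate: how often the agent tries something random

def pull(arm: int) -> float:
    """Simulated environment: reward 1 with the arm's hidden probability."""
    return 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0

estimates = [0.0] * len(TRUE_PAYOFFS)  # the agent's learned value estimates
counts = [0] * len(TRUE_PAYOFFS)

for step in range(10_000):
    # Explore (random trial) with probability EPSILON, otherwise exploit.
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOFFS))
    else:
        arm = max(range(len(TRUE_PAYOFFS)), key=lambda a: estimates[a])
    reward = pull(arm)
    counts[arm] += 1
    # Incremental mean update: each error nudges the estimate toward the truth.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # converges toward TRUE_PAYOFFS; the best arm dominates
```

Nothing in the loop encodes the right answer in advance; the behavior emerges from repeated error and correction, which is the feature Possati leans on when he calls such systems creative.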

“‘This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury’ (503). Lipson (2019) has obtained the same results with another robotic experiment about autonomy and robots. AI systems are capable of creating a completely new form of behavior by adapting themselves to new contexts. This is an artificial creativity” (Possati 2020).

An artificial creativity seems to be the basis for an artificial unconscious, which in turn allows us to speak meaningfully about a society of artificial intellects, or artificial actors, subjects, citizens and so on. This brings us to the idea of a biopolitics of Artificial Intelligence, a term that does not exist yet, but one that I wish to establish as part of my larger research project concerning biopolitics. To be clear, this notion does not form any part of Possati’s research, at least not explicitly.

AI biopolitics would study the conditions of possibility as well as the social impact of AI on existing power relations in a given society: how various AI systems would be used to promote class interests, privileged groups, powerful organizations, economic exploitation, warfare, minority exclusion, political agendas and so on; the integration of machine learning into bureaucratic institutions, government surveillance systems and the police apparatus; their role in criminality, questions of privacy and their overall function in supporting technocratic capitalism. If AIs were given the legal powers of an autonomous citizen while following a set of hidden commands from ideologically biased algorithms, we would be dealing with yet another assault on human freedom and dignity.

Possati does refer to a book by Cathy O’Neil called Weapons of Math Destruction. O’Neil states that an algorithmic society, or Algocracy, is characterized by four main features: algorithms can incite and direct human action; they can be biased, reproducing that bias structurally and exponentially through “efficient” decision-making; they can trigger serious direct and indirect forms of violence (from “smart” weapon systems to biased profiling); and they can be manipulated by human agents, especially through the creation of responsibility gaps.

The term AI unconscious needs further qualification. After all, it only makes sense to speak of an unconscious in contrast to, or within, a consciousness. So how could we speak of an algorithmic unconscious if we still have so much trouble defining artificial consciousness or artificial intellect as it is? Possati responds by redefining AI as a product of human–AI relations, thereby stating (somewhat superfluously, I think) that an AI “qualifies” as an autonomous being as long as it is a perfect imitation of one. So we do not have to worry about whether an AI feels or deliberates in an authentically human fashion, that is, in a mode of self-awareness. It only needs to “blend in” at the level of outward behavior. What is shocking is that this account of intelligence does not contradict the psychoanalytic account. Instead, in accordance with psychoanalysis, Possati’s theory seems to state that goal-directed, “crafty” problem-solving skills form the basis (the unconscious basis) of both the human and the AI types of agency; that there is a good reason why the words technique and technology are so similar: ontologically, they are identical. So the technology of the unconscious operates equally well with a human agent and an AI.

But this does not seem to be the case with Freud. In Freudian theory, the unconscious is not simply a preconscious or a reserve of information that requires only a certain effort to be brought to consciousness. It is repressed material, which through its repression has acquired a meaning and an agency of its own. It can only break into consciousness; it can never be brought there peacefully.

With Lacan, repression or “castration” is immanent to and caused by language itself. Language is the moment when our conscious mind is separated from our unconscious. Language introduces the first binary relation and the split within the subject. We are divided through language.

The symbolic field of language could easily be dominated by codes and algorithms, especially if social relations are mediated by AIs. In this way, language itself can be interpreted as a type of technology that produces subjects with a conscious and an unconscious, whereas technology can be seen as an extension of precisely that capacity of language to constitute autonomous subjects. If these assumptions were justified, we could meaningfully speak of an artificial unconscious.

“The unconscious is at the same time the effect of a technological mediation and the origin of a new form of technology” (Possati 2020).

Let’s move on to the notions of miscomputation and information in light of what has been said so far. Miscomputation can result from a variety of incidents: a hardware fault, a bug in the code, or a mismatch between software and hardware. But Possati states that there is more to technical failure. Glitches and errors should be seen as expressions of an unconscious algorithm, a dysfunction as opposed to a mere malfunction. Miscomputation should be read as an expression of an AI unconscious, like Freudian slips and creative failures.
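A toy illustration of the third kind of incident, the mismatch between design and substrate (my example, not Possati’s): binary floating point cannot represent the decimal 0.1 exactly, so a program that is “correct” at the level of its logic still drifts from its specification at the level of the hardware.

```python
# Illustration (not from Possati): a "correct" program that miscomputes
# because IEEE-754 doubles cannot represent 0.1 exactly.

total = 0.0
for _ in range(10):
    total += 0.1          # design intent: total == 1.0

print(total)              # 0.9999999999999999 on standard doubles
print(total == 1.0)       # False: the glitch lives in the substrate, not the logic
```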

“They express the tensions between human desire, logic and machinery, at different levels (design, implementation, hardware, testing, etc.), that cannot be controlled and repressed. As Vial (2013) points out, the tendency to have errors and bugs is an ontological feature of software and AI. There will always be in any systems an irreducible tendency to instability, to the deviation from the design parameters and requirements, and thus from the ‘normal’ functionality” (Possati 2020).

Miscomputation effectively operates as informational noise, a distortion of communication. Except that noise, it turns out, is constitutive of information; it plays a creative role in relaying it.

“Information without noise is impossible because information is the effect of noise. Disorder is the rule, while order is the exception” (Possati 2020).
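There is a Shannon-theoretic way of reading this claim, and a minimal sketch can make it concrete (my illustration, not Possati’s): a perfectly ordered message is perfectly predictable and so carries no information, while a more “disorderly” one carries more.

```python
import math
from collections import Counter

# Illustration (not from Possati): Shannon entropy as a measure of how
# much information a message carries per symbol.

def entropy(message: str) -> float:
    """Shannon entropy in bits per symbol, estimated from symbol frequencies."""
    freqs = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in freqs.values())

print(entropy("aaaaaaaaaa"))   # 0.0 bits: total order, zero information
print(entropy("abcabcabca"))   # ~1.57 bits: more "disorder", more information
```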

And so Possati formulates a very neat idea: the algorithmic unconscious, very much like the human unconscious, is articulate in its entropic features. It succeeds in its failures, rewiring itself through inefficiencies and miscomputations.

This raises some fabulous questions for further investigation: Can AIs suffer from mental disorders? Can they regress? Can they suffer from guilt and insecurities? But most importantly, will we be able to form meaningful emotional connections with them?
