I gave a two-person talk with Nan Z. Da last fall at U. Michigan on the theme of “Digitality and Intent.” Here is my text for the talk, slightly revised and expanded.
In prepping for this event, Nan and I decided on the theme of “Digitality and Intent” as a way to address one of the hard problems in computational approaches to literary study, namely the relationship between measured textual features and human intentionality. So let me begin by adopting the naive posture: what does “intent” mean? I propose to think about intent along three different lines.
In the context of digital inscription, intent might first appear as measurable qualities of users. One might pose the question “what is a user’s intent?” And the answer typically revolves around the notion of features. A feature is simply some differential that is measurable. And the assumption among engineers is that these features inscribe user intent; the features are authorship in some basic sense. I stress that features really are any kind of differential whatsoever — provided the differential can be positively measured. In mathematical terms, intent might be inscribed as a vector (or set of vectors), given that vectors are a simple way to register a differential. (Vectors are taught in school as having “a direction and a magnitude,” although most computer languages store a vector simply as an ordered list of numeric components — which encodes direction and magnitude together, and which you also obtain by subtracting a starting point from an ending point.) And I still think McKenzie Wark wrote the definitive theoretical book on this, with her A Hacker Manifesto (2004), which elaborates the concepts of vector and vectoralist.
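To make the parenthetical concrete, here is a minimal sketch in Python (the specific points are arbitrary illustrations): a vector stored as components carries its direction and magnitude at once, and subtracting two points yields exactly that component form.

```python
import numpy as np

# Two arbitrary points; the vector between them is just the
# componentwise difference.
start = np.array([1.0, 2.0])
end = np.array([4.0, 6.0])

v = end - start                 # components: [3.0, 4.0]
magnitude = np.linalg.norm(v)   # Euclidean length: 5.0
direction = v / magnitude       # unit vector pointing from start toward end

print(v.tolist(), magnitude)
```

The component form and the two-point form are interchangeable, which is the sense in which they “accomplish the same thing.”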
Given these features captured as measurement vectors, data science is often described as a “multidimensional” science. After all, a dimension is just an axis on which to measure. So if you have 17 measurement axes, each measurement becomes a vector with 17 components, and your data lives in a 17-dimensional space. This might not make much intuitive sense to everyday human experience, but a 17-dimensional space is completely normal in data science, just as an 88-dimensional space is normal in piano sheet music. The recent success of AI has a lot to do with these kinds of high-dimensional spaces (and the Linear Algebra necessary to calculate and transform them). Of course there are many problems with this approach, which I will merely hint at without attempting to resolve: Are you measuring accurately? Is your measurement model distorted by unwanted biases? Do measurements (no matter how complex and nuanced) effectively capture users’ intent?
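The measurement picture above can be sketched in a few lines of Python. This is a hypothetical illustration, not real user data: each “user” is a point in a 17-dimensional feature space, and linear algebra (here, cosine similarity) compares their orientations regardless of how many axes there are.

```python
import numpy as np

# A stand-in for measured features: 17 axes per user, values drawn at
# random purely for illustration.
rng = np.random.default_rng(0)
user_a = rng.random(17)   # one user's 17 measurements
user_b = rng.random(17)   # another user's 17 measurements

def cosine_similarity(u, v):
    # How closely two feature vectors point in the same direction,
    # independent of their magnitudes.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim = cosine_similarity(user_a, user_b)
print(round(sim, 3))
```

The same function works unchanged for 17 axes or 17,000, which is what makes high-dimensional spaces “completely normal” in practice.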
If intent-as-measurable-features has an almost “legalistic” quality — in terms of establishing facts as evidence — let’s look next at a very different way of thinking about intent. There is a long tradition of understanding intent as intentionality. I mean this in the phenomenological sense. But what does intentionality mean in phenomenology? You will recall that phenomenology, as formalized by Edmund Husserl, Martin Heidegger and others, is a way of thinking about subjective experience. (Of course we find the term “phenomenology” as well in Hegel’s key 1807 treatise, The Phenomenology of Spirit, and the concept has roots going back to ancient Greek philosophy.) In phenomenology “intentionality” refers to the way in which all subjective experience is oriented toward something. You might even say it’s about the frontal quality of experience, or having a front. This comes directly from embodiment. We mean the frontal sensory organs, as opposed to the dorsal or para. Looking forward, rather than looking back. We humans might seem to be symmetrical left to right, but that’s not even true, and regardless we’re most certainly not symmetrical from front to back. Phenomenology exists, in part, to affirm this claim, and to use it as a basis for consciousness and thinking overall. Thinking is “frontal” because thinking orients itself toward an object of thought.
In one of the great critiques of phenomenology, Sara Ahmed has shown that intentionality is a discriminating technology, and hence supports a series of racial and sexual sortings (or hierarchies). Specifically, Ahmed wrote compellingly about the frontal-dorsal distinction in terms of what gets foregrounded versus what gets backgrounded. A family portrait might be foregrounded in a home, so that whiteness can be backgrounded (or can assume its position as ground). This is one example she gave in the book Queer Phenomenology.
Okay, so here we have intent as intentionality, namely a kind of low-level claim about the necessarily oriented nature of subjective experience. In a sense: you are an arrow pointing toward something; subjects are vectors. And the grounding for the vector isn’t found in Linear Algebra, rather it’s found in the psyche itself, in the structure of consciousness, in the embodied subject.
Let’s not forget that Edmund Husserl, a crucial figure within phenomenology (although certainly not the only one), was adamant about grounding arithmetical number not in logic or pure rationality, but in what he called “a psychological characterization of the phenomena” of number. In other words, arithmetical number had no universal anchor in rationality or logic — that was Gottlob Frege’s position contra Husserl — instead, number was ultimately anchored by a psychological phenomenon, something effectively outside of, and prior to, number. I see this as a direct strike against computationalism, even though Husserl wrote his Philosophy of Arithmetic in 1891, prior to the advent of modern digital computers.
Many years later in the 1960s and ’70s Hubert Dreyfus explicitly used phenomenology to explain the limits of computation. He was arguing against AI, and many people believe — myself included — that he won that argument pretty definitively. Today’s renaissance in AI isn’t because Dreyfus was wrong; you might say that Dreyfus was so right computer scientists threw out many of their starting assumptions (during the so-called AI winter) and retooled their discipline from the ground up around a whole different set of principles absent from the Dreyfus debate of 50 years ago. Namely, principles drawn from empiricism.
Let me move to a third way of thinking about intent. As described by Gilles Deleuze, both in his solo work and in his collaborations with Félix Guattari, intent may be understood as intensity. For Deleuze, intensity (or sometimes “the intensive”) refers to the provisional increase of a quality or affect. Using a loose metaphor, we might think of intensity as the velocity of the affect, or better yet its acceleration. Ask yourself: is an affect cohering or decomposing? Is a quality persisting as a singularity point? As a threshold? Or is the affect dissipating, following a line of flight, virtualizing into an adjacent space? These are the accelerations or decelerations of the affect. For Deleuze they characterize intensity.
Philosophy, of course, has long concerned itself with categories like thought and extension. You find this in Descartes, Spinoza, and many others. I suspect Deleuze was having a bit of fun with that second category: not extension but intension; not the extensive but the intensive.
To repeat, intensity in the Deleuzian tradition is about focusing on the provisional coherence of qualitative terms like affect. Because of this, qualities are here understood in terms of immanence (or remaining within themselves), rather than through some logic of externalization or deterritorialization. The logic of intensity is also a logic of immanence.
As a side note, but a note that is hugely important for digital theory, I stress that this is tied up with a pretty relentless rejection of metaphor, figuration, language, signifying processes, and representation more generally, paired with an unambiguous embrace of the Real, and a rejection of the Imaginary and the Symbolic (to put it in Lacanian language). The book Anti-Oedipus, for example, is 400 pages of semio-phobia. Signs and the symbolic order — they simply do not survive Deleuze & Guattari’s relentless onslaught. Even a book like The Logic of Sense, the text where Deleuze addresses language directly, is equally allergic to mainline linguistics and semiotics in, say, the Saussurian tradition. That’s a characterization, not a criticism! Deleuze finds his theory of language elsewhere, in C.S. Peirce perhaps, or through the Victorian nonsense fiction of Lewis Carroll. This is one of the reasons why I insist that Deleuze is one of our foremost analog philosophers; and that he has almost nothing to tell us about digital media.
“Intent” also naturally opens the question of the intentional versus the unintentional, a.k.a. accident…along with a cluster of related notions like randomness, entropy, glitches, bugs, and so on. I don’t have time to explore them here, except to say that randomness, broadly conceived, is both highly common in everyday computing practices and the outer limit of the technology itself. This seeming contradiction is why computer scientists will frequently talk about “pseudo-randomness” but almost never about pure randomness.
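The “pseudo” in pseudo-randomness is easy to demonstrate. A minimal sketch in Python: the generator’s output is fully determined by its seed, so re-seeding reproduces the exact same “random” sequence — which is precisely why the field hesitates to call it pure randomness.

```python
import random

# Seed the generator and draw five "random" digits.
random.seed(42)
first_run = [random.randint(0, 9) for _ in range(5)]

# Re-seed with the same value: the identical sequence comes back.
random.seed(42)
second_run = [random.randint(0, 9) for _ in range(5)]

print(first_run == second_run)  # prints True: the stream is reproducible
```

Deterministic machines can only simulate chance from within; genuine randomness has to be imported from outside the system (hardware noise, for instance), which is the sense in which it marks the technology’s outer limit.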
To summarize the above, “intent,” in the broadest sense, might span these three domains: (1) structuralism; (2) phenomenology; (3) theories of quality/affect/immanence. Or at least these three; there are certainly others.
I claim that digital media can only really handle the first domain. I make this claim here intuitively if not also dogmatically, although I believe the claim can be adequately supported. Instead of trying to elaborate the claim here, I will highlight its most important corollary, namely that digitality is a variant of structuralism. Both structuralism and the digital begin from a linguistic/symbolic model. They both register the world along binary axes. They both put great emphasis on logics, codes, and discretized marks. There are even some explicit connections that link the two, in for example Lacan’s Seminar II. So, again, my position: if you want to do digital theory you probably need to be doing some version of structuralism (whether you want to or not!). Or at the very least you need to account for the structuralist turn in theory, which I consider to be one of the high water marks for digital theory. Such theory has yet to entirely metabolize the consequences of this. (I presented this argument in a recently published essay titled “The Golden Age of Analog,” the punch line being that the golden age of analog is today, in fact, while the golden age of digital thinking was already several decades ago during the heyday of structuralism.)
Now back to intent-as-features… It’s clear that today’s data science is more empirical and positivistic than “high” structuralism. I acknowledge that these features are not Claude Lévi-Strauss’s features (death/life, self/other); they’re not A.J. Greimas’ features either (something and its negation). Nevertheless we’re dealing with features that are legible through a binary economy. So maybe digitality is a kind of “larval” or adolescent structuralism! It deals with empirically measurable features, yes, but has yet to make sense of the symbolic order that subtends it.
To repeat, if digitality is competent at anything, it seems to be competent at the first type of intent (measuring features), but so far pretty incompetent at the second and third types (namely, subjective intentionality; and desiring-production understood as affective immanence). And the only way digital technologies have won any successes in the second and third types is due, in crass terms, to forms of simulation such as the Turing Test. Which is why I maintain, somewhat ruefully, that any theory of the digital can and must stand firmly within metaphysics.