Economics is a regional knowledge that is constructed with the help of specific discourses and forms of knowledge – mathematics, statistics and epistemology. Knowledge is produced about an object (which allegedly exists independently of the discourse), and under certain conditions or ceteris paribus clauses this knowledge can be specified as a mathematical model. In contrast, Laruelle's concept of “economo-fiction” or “generic science” is radically different from the economic sciences – economo-fiction is in fact a new discipline which, among other things, aims at a non-Marxist interpretation and a transformation of the formalism of the economic sciences, including the Marxist critique of economics. “Economo-fiction” includes axiomatic and generic knowledge, i.e. a genuinely generic theoretical practice. Laruelle defines the generic as follows: “A type of sciences or knowledges [connaissances] sufficiently neutral and devoid of particularity in order to be added to others more determined and co-operate with them, transforming them without destroying them or denying their scientific nature. They are capable of being added to others acquired in a more ‘classical’ way without unsettling what the latter take from their domain of object and legality, i.e. capable of transforming knowledge without philosophically destroying it.”1 (Laruelle 2008)

The following should be noted from the outset: to distinguish economo-fiction from economic science for the sole purpose of positioning it as a new regional knowledge is something we want to avoid with Laruelle (otherwise economo-fiction would simply be economics as science). On the other hand, to assume that there can be no regional knowledge qua economo-fiction at all would again mean subsuming it completely under the aegis of philosophy or non-philosophy. Economo-fiction instead aims at a new theoretical practice of performance, at a new organon, i.e. it tries to transform the regional knowledge of (Marxist) economics in a non-representational way, among other things by using quantum theory and fractal geometry, and this implies, first of all, destroying any common-sense ontology that is still present in Marxist economic criticism itself (such as the Marxist labour theory of value). By means of a specific theoretical apparatus, economo-fiction certainly invents its own scientific models, objects and relations, in particular by cloning the Marxist material of economic criticism. And if it uses mathematics, it certainly does not do so in the manner of a Badiou, who, with the use of set theory, once again binds knowledge to the ultimate obligation of exercising ontology. With the insight that models do not represent an (economic) world, a methodological debate is opened which problematises the integration of mathematical models into discourses (game theory), narratives and even economo-fiction. With Laruelle, we also sharply oppose (economic) theories that call for a kind of transcendental numéraire or a universal substance, as found in the concept of utility in neoclassical theory or in the concept of abstract social labour in labour-value Marxism.
Both theories establish a circle that makes every economic event or object stagnate in its peculiar self-referentiality or tautology: this commodity is worth X euros because it gives the buyer the quantity X of utility, or, alternatively, because labour time X is stored in the commodity we know that its price must be X euros.2

When economo-fiction takes up economic structures and objects, it does not do so in order to represent them by means of linguistic theories and discourses, but to transform them exclusively by cloning the theoretical material, which can have many sources, and this definitely requires new scientific patterns – hypotheses, deductions and experimental tests. (Cf. Laruelle 2014: 66) A generic economic theory must necessarily be freed from its philosophical “vices”, but in return it should by no means merge with the empirical; rather, it should adhere to it in a scientific mode, treating it, among other things, as a hypothesis that opens a theoretical space for theorem-without-theory and models (math-without-mathematics). (Ibid.: 172) In doing so, theory does not synthesise the empirical, just as the empirical is not the measure of theory. The identity of the empirical and theory can only come about in the last instance, i.e. only in their mutual reference to the real, which excludes any direct or imaginary synthesis or mixture of the empirical and theory. In Althusser's terminology, talk of empirical-theoretical knowledge would be based on the imaginary fusion of the real object and the object of knowledge. Non-Marxist “theory”, on the other hand, insists on an immanently practical way of recognising facts that are always problematic, and this means that we cannot analyse data without the addition of theory. At times, concepts and categories seem to resist the empirical, but insofar as they claim “truth”, they are always related to it – though here the empirical does not act as the sole measure or judicial authority for theory. Without the study of the facts, there is no development of theory.

Economo-fiction turns to the given (which is already theoretical material) to clone it. This does not imply any reflection of economic reality, but requires transcendental-performative description according to the real, using Laruelle’s term “adequate-without-correspondence”. With regard to Marxism in particular, it is by no means a question of emptying it of its content or, alternatively, of re-arranging it with new content in order to recompose it according to current socio-economic realities. Rather, its materialist, economic and political contents should be treated as simple symptoms that must be reinterpreted and finally transformed in order to transform thinking according to reality (capital) into thinking according to the real; the real always escaping reality in one way or another. For economo-fiction this means that it is not in any exchange relationship with capital-reality, but also not with the real.

Any use of the term “economic world” as an a priori or as a mnemotechnical mapping is based on philosophical thinking; one could even say with Laruelle that here the words “world” and “philosophy” are interchangeable. For if the theoretical body of the economy has a “world”, this implies that all possible economic discourses can be translated into philosophy, which then once again establishes itself as a meta-discourse over the regional knowledge of the economy. To subvert this hierarchy, the following thesis should be put forward: there is no economic world! To this extent, economo-fiction would be characterised as a non-knowledge rather than a knowledge (episteme/techne), insofar as it de-conceptualises, among other things, the material of Marxist economics and the social sciences, whereby the new economic models that may result from this de-conceptualisation do not represent the world, but rather subject it to a radical critique by first destroying the existing theories and models and replacing their claim to absolute sufficiency with contingency. Economics could then only be understood as the science of non-conceptual social relations. Here no new economic “world” is being created; rather, to use Roland Barthes' words, the first step is to make the linguistically expressible inexpressible.

Many of the problems that the humanities have with the natural sciences arise simply from the fact that one is used to thinking exclusively discursively, in terms of concepts. The economic figure of Homo oeconomicus, for example, is usually criticised only for not paying enough attention to social and political relations. But it must be possible to carry out radical economic criticism without the direct involvement of politics. Here we immediately encounter the problem of exogeneity. To make an economic model as accurate as possible, it would indeed have to include the models of all other sciences. However, economic models deal only with very specific causalities such as prices, production factors, human agents, production relations, ecologies, etc., and they divide these factors into endogenous variables, which are determined in the model, and exogenous variables, which are determined outside it. Exogeneity remains interesting here, however, because it cuts across all the disparate orders (political, social, biological, etc.) of causality.

If an economist calculates a liquidity premium statistically, he can proportion it indirectly, i.e. according to the variables of a model, without having to assume a totalising factor (proportion-without-proportionality). Econometric methods that do not work discursively try in the best case to determine the values of exogenous variables via such models themselves. They do not refer “correctly” to an object, but proceed, through a pragmatic use of numerical properties within a quasi-tautological model, to generalise certain economic cases non-totally. Roughly speaking, econometrics works exclusively with numbers and not with words. Philosophy has no equivalent for this way of determining exogeneity and must therefore synthesise all possible causalities, i.e. philosophy always treats exogeneities only in the context of its own decisions. And if certain phenomena of finance, such as exotic derivatives, cannot be translated into the language of philosophy, the philosopher declares them unrecognisable or pushes them into the “black box” of the financial system, perhaps in order to open up a moralising discourse about bankers.

The model makes no distinction between the fact and the fact that something is a fact (factuality). The model, when defined by the use of certain conditions, performs a quasi-tautological deduction of its endogenous relations, continuously nullifying or eliminating differences and objects. If we speak of mathematical models, this means that conceptual knowledge or epistemes are degraded to secondary knowledge – in other words, conceptual knowledge or epistemes are a by-product of factuality. In the best case, there remain epistemological questions for a “philosophy of economics” where they are related to economics as science. One has to note that econometric models mainly make use of numerical properties when they generalise a case and thus do not represent anything at all. Most of the time, however, even in mathematical model theory, the correlationist framework of philosophy is not completely abandoned, in which a “correct” economic model is precisely one whose internal relations somehow still refer to current economic objects. (Cf. Meillassoux 2008: 25)

Let us first take a rough look at the approach of model theory: with the help of a radically pragmatic, neutral use of language, axioms are first introduced – one starts, so to speak, from zero and then sets a series of conditions for variables that can be used in certain theoretical situations. In a certain sense this language, when it concerns economics, must be convertible into mathematics or statistics, but first an “interpretation” of a set of economic phenomena is made, and this is stripped of any contingency in order to produce an elaborate tautology qua mathematical models, while at the same time external distinctions are transformed into exogenous parameters. One could probably say in Laruelle's terms that the critique of economics is characterised at this point at least by a constant non-learning, since conceptual knowledge is transformed into non-knowledge. It labels itself rather than describing something.

Compared to this still scientistic position, generic economists start with conceptual models based on discursive disciplines and reduce them to their non-conceptual “essence”. In analogy to Niels Bohr, economists try to elaborate how and by which scientific and technical means discourses have the capacity to produce meanings, in order to relate them to models of operativity. One takes conceptual statements from the ontology of a discipline (What is economy?) and shows in detail how these interfere, rightly or wrongly, with mathematically constructed models. Heisenberg wrote the following from the perspective of quantum physics: “What we observe is not nature itself […] but nature that is exposed to our question. [The] scientific work in physics consists of asking questions about nature in the language we have, and trying to get an answer through experiments that we carry out with the means at our disposal.” (Heisenberg 2006: 85) From a correlationist perspective, on the other hand, the classical economist constructs a model, a grid or a syntax to lay over the world, and truth then arises from the correspondence of theory and world.

Let us assume that there is a multiplicity of disparate discourses that include certain (theoretical) phenomena, problems and issues. An economist chooses a specific topic from them and tries to model the discourse, under the assumption that certain descriptions of the phenomenon eliminate contingency (in relation to data problems). An internally consistent modelling and interpretation that eliminates contingency must be found, and the model is then considered correct. If it is “correct”, then it is the only possibility inherent in the model, which makes other possibilities or contingencies disappear. Exogenous variables remain the same or have the same function in relation to the model, even though they may well assume different values in different situations. Although the inclusion of a large amount of data in the model is desirable, it does not seem possible to apply the concept of experiment to economics without further ado, since the (scientific) experiment destroys the knowledge that previous experiments on the system have produced. Economics, on the other hand, must limit the contingency of each experimental event in order to de-conceptualise it gradually until it can be integrated into the relational structure of a specific model. At the same time, economy-as-science works with the operative sampling of existing discourses and reduces them to useless metaphors. This science, by including the generic method, would be understood as a non-writing and a de-conceptualising, which does not exclude contingency but at least limits it. (The term economo-fiction describes economics from the perspective of a non-correlationist theory. It refers to the fictionalist school of mathematics.)

Let us pursue the problem of economic modelling further. In his book The Global Minotaur, Yanis Varoufakis refers to an economic theorem that he helped develop, which seeks to demonstrate that “solvable economic models cannot process time and complexity at the same time”. (Varoufakis 2012: 169) In economic modelling, one either pays more attention to complexity (relations between variables, coefficients and parameters) and usually develops static equilibrium models, or one privileges time and decides to formalise dynamics, in particular by using non-linear differential equations and designing non-static equilibrium models. Simultaneity of complexity and time would mean designing and discarding models that depict real-time price movements and transactions, and thus allowing crises to be included in the models – which cannot succeed as long as models transform the economy in a way that deprives it of its processuality at the very moment of modelling.

Economic models use the modern axiom concept of mathematics, whereby the axiom is set “arbitrarily” (under ceteris paribus assumptions – if-then) and its application rests purely on logical consistency.3

For an operationalisation of the relations of total capital, this means that non-linear dynamics of highly complex systems have to be constructed which at the same time “correspond” to the competition between the individual capitals; the models have to take into account both the manifold feedbacks in the dynamic time courses and the simultaneous interactions of different individual capitals (adequate-without-correspondence). The distinction between useful distribution functions whose dynamics are modelled with stochastic equations (macro level) and the calculation of random fluctuations at the micro level cannot solve this problem. (Cf. Mainzer 2014: 219) Philip Mirowski has shown that the model of dynamic stochastic general equilibrium (DSGE) is a very special case in which a one-person economy is in agreement with the overall economic system, so that a reconciliation of micro- and macro-economics can occur. (Mirowski 2015: section 5, Kindle edition) The very existence of heterogeneous individual capitals calls for alternative equilibria and does not allow for generalisation. So new axioms must constantly be added or old ones subtracted. But how can one counter the undecidable propositions or the forces of infinite sets, which cannot be grasped axiomatically? How does an axiomatics based on countable models relate to a continuum or to jumps? And finally, how do financial-mathematical axiomatics and the coherent risk measures corresponding to them behave in relation to flows in mathematics (intuitionism) that are adequate to deterritorialised money and just escape the coherence of the calculus?
In the best case, then, the measuring measure must be shifted into the becoming itself; or, to put it another way, a simulative model generated by means of the extremely high computing power of a supercomputer would have to have the capacity to map and/or measure the flows of money capital as well as their rhythms and monetary transactions in real time (today, at least, models are no longer regarded as mere representations of economic reality). One possibility is to subject the models themselves to stress tests, which probe their robustness under extremely fluctuating scenarios including the presence of jumps and breaks (the out-of-line worst case of a model, so to speak, deserves special attention), in order to find effective solution methods or calculate probabilities on the basis of the collection and evaluation of gigantic masses of data. (Ibid.) Even improbabilities – think of Taleb's Black Swan – are thus still to be integrated into the distribution and management of risks, especially derivative risks, which arise with the collection and recombination of huge masses of data: machine-specific risks. At the same time, the algorithms always remain on the hunt for what lies outside their territory and its probabilities; they are virtually defined by what they are not yet and what they may never become.
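The stress-test idea can be made concrete with a deliberately minimal simulation (all parameters – volatility, jump size, jump probability – are invented for the sketch and stand in for no actual risk model): the same return process is run once as a plain Gaussian and once with rare jumps mixed in, and the worst observed loss is compared.

```python
import random

# Same return process simulated without and with rare jumps; all
# parameters (volatility, jump size, jump probability) are invented.
def worst_loss(jump_prob=0.0, jump_size=-0.25, n=100_000, seed=1):
    rng = random.Random(seed)     # fixed seed: reproducible scenario
    worst = 0.0
    for _ in range(n):
        r = rng.gauss(0.0, 0.01)  # "calm" daily return
        if rng.random() < jump_prob:
            r += jump_size        # rare break / jump
        worst = min(worst, r)
    return worst

calm = worst_loss(jump_prob=0.0)
stressed = worst_loss(jump_prob=0.001)
# The Gaussian worst case stays within a few standard deviations;
# the jump scenario produces a far deeper loss that a model fitted
# only to the calm sample never sees.
```

The point of the sketch is exactly the asymmetry described above: the jump scenario is invisible to any probability estimate trained on the calm regime.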

Let us go one step further. It is a received view of heterodox economics that the general economic theory of equilibrium has neglected the question of dynamics and crisis-prone growth, and that even its expanded methodology has not led to the hoped-for proof of the stability of equilibria. The Sonnenschein-Mantel-Debreu theorem in particular points to this (the models of general equilibrium theory always remain at its general level). In economics, the various theories of equilibrium have often been presented as modelling the actions of rational agents. However, this should be treated with extreme caution! It is possible that these models take far too few factors into account, so that stable economic equilibria cannot necessarily be derived from them. Furthermore, the assumption of the rationality of economic agents is far too presumptuous if one considers the results of “Behavioral Economics”, according to which factors such as herd behaviour, imitation and rules of thumb tend to influence the actions of agents. Newer approaches such as “Econophysics” therefore modulate the theory of the rational agent more finely, introduce solutions for diametrically interactive conflicts by applying game theory, and thus try to problematise the identity of the agent and his egoism in view of the many roles he has to play, without, however, completely abandoning the subjectivist or correlationist thesis. Yet without more precise criteria for economic stability, the concept of equilibrium remains a pure model construct. For this reason, we will continue to ask under which conditions the interactions of the agents can lead to equilibrium – whether Smith's “gravitating” actually takes place. In this way, the models become more and more dynamic.
Due to the difficulty of solving the problem of stability at all within the framework of models and their ceteris paribus assumptions, the questions of the real existence and uniqueness of equilibria arise again. At this point, the position formulated by Marx and even by F.A. Hayek – that macroeconomic flow dynamics are processes that take place behind the backs of the agents – can no longer be dismissed.

A whole range of further objections can be raised against certain types and methods of modelling. On the one hand, the formal structure of a model (the relations between its elements) may be too differentiated to be related in any way to “real economic variables” (the model then in fact remains purely tautological); on the other hand, the model may be internally simplified, so that the preconditions made with regard to modelling become increasingly important. The problem will be hard to solve with crude picture theories according to which models depict economic reality or refer to its phenomena. If, on the other hand, the performative act of the model is overemphasised, the model must be able to demonstrate its place of use and its operational success. What is at stake is not only the internal, non-discursive logic of the model itself, but also its relation to a reality that cannot itself be grasped substantively or phenomenologically if one wants to escape the convertibility or reciprocity between theory/model and reality. Reality could therefore only be defined as reality with reservations; it would at least have to be supplemented with the concept of virtuality. (Cf. Strauß 2013: 180) Reality is then neither the object and/or result of a conceptual-syntactic discursivity nor that of a non-discursive model. Its “definition” as real virtuality concerns rather what escapes theory, mathematics and the model, and finally any representational objectivity.

Let us take a closer look at the various theories of equilibrium and their models. From the point of view of physics, the following can be said about statistical model theory: if a lake changes its water level according to its inflow and outflow of water, then equilibrium theory studies the water level of the lake at one point in time and then again at another. This may still work for determining the water levels of lakes, but what this theory does not explain at all are the dynamic movements of waterfalls and storms. Faced with rippling water or waves, the equilibrium theorist has no suitable mathematical instrument to explain these movements.

Most schools of economics include in their discourses an equilibrium version (IS-LM Keynesianism, Sweezy-Morishima-Steedman, equilibrium Marxism, Marshall's Competitive General Equilibrium; cf. Freeman 2015; Sweezy 1971) and a non-equilibrium or temporal version (Keynes and the temporalism of Kalecki, temporalist interpretations of Marx's theory of capital accumulation such as that of Andrew Kliman, Austrian marginalism, etc.). But mostly equilibrium theory remains the dominant paradigm. (Static) equilibrium theory is a normative standard that eliminates dynamic motion – which it attempts to show with simultaneous equations implying that all significant quantities of the model are the same at the end of a given period as at its beginning. From these equilibrium theories it is hardly possible to derive crisis phenomena, which would actually have to be introduced here as endogenous features. As a consequence, external causes outside the system must then be assumed for all crisis phenomena: a wrong monetary policy, oil shocks, incompetent government policies, irrational behaviour of economic agents, etc. If the economic system tends a priori towards perfect reproduction, then logically there can be no serious deviations from the established equilibrium. And if deviations do occur, then politics must regulate the – by no means always perfect – markets in such a way that they are ultimately in line with the ideal of equilibrium.

In most cases, the variables of the model are divided into two groups: endogenous variables such as prices and quantities, and exogenous variables such as politics, culture or psychology. What is decisive here is that economic equilibrium theory distinguishes between endogenous and exogenous causes. For a natural scientist, however, the temporal approach would be the basic training for further exercises. One can now extend the models, for example by taking rates of change or the differential calculus into account, ∂x/∂y (the derivative of x with respect to y). The model then works qua its temporal connections, relations and conjunctions, for which even the term equilibrium may be retained, provided it is not drastically reduced to the definition of a state. Ayache describes the differential as follows: “The differential is such that neither of the two entities (dy, dx) that are seemingly related by the differential are present in the differential. The differential is only the relation, not the actual entities. It is only the power of producing, or generating, the co-variation of the two mathematical entities when they come to be actualized. It is a place of repetition and retrieval (extraction) rather than a finished result. It is the place where the function (to be actualized) is determined, that is to say, differentiated, the place where it could have been otherwise yet it is faceted and cut to be this way, the place where the rift separating the variables and orienting their relative differences (in other words, their future co-variation) is first opened and the function is first shaped.” (Ayache 2010a: 293-294)

So let us assume that a system/model contains two types of variables, exogenous and endogenous. Endogenous variables are those which the economist assumes to be intrinsically related to markets, such as prices, quantities, labour input, interest rates, wages, etc. (We refer to Freeman 2015 in the following presentation)

The state vector of all these variables at time t is: x_t = {x_{1t}, x_{2t}, …, x_{nt}}   (1)

x_t varies according to the problems being studied, the key question being how the system/model moves from one point to another. This concerns the problem of temporal approximation, which is here initially still treated as discrete. As examples, the laws of gravitation, Newton's laws of motion, the laws of thermodynamics etc. may be considered. In economics, one could cite Kalecki's price equations, accelerator-multiplier systems, Harrod's growth equations, Marx's reproduction schemes or the non-linear cyclical models that Richard Goodwin in general and Joseph Schumpeter and Paul Samuelson in particular have introduced into economics. (On the equilibrium conditions of Marx's reproduction schemes including their matrix representation, cf. Willi Semmler 1977: 170f.)
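The accelerator-multiplier family just mentioned can be illustrated with the linear textbook form associated with Samuelson; the parameter values below are purely illustrative and not taken from the text:

```python
# Linear multiplier-accelerator model (textbook form associated with
# Samuelson); all parameter values are illustrative, not from the text.
def samuelson(c=0.8, v=0.9, g=100.0, periods=60, shock=10.0):
    """Income path Y_t = G + C_t + I_t with lagged consumption
    C_t = c*Y_{t-1} and accelerator investment I_t = v*(C_t - C_{t-1})."""
    y_star = g / (1.0 - c)        # stationary income level G/(1 - c) = 500
    y = [y_star, y_star + shock]  # perturb the system in period 1
    for t in range(2, periods):
        cons = c * y[t - 1]
        invest = v * (cons - c * y[t - 2])
        y.append(g + cons + invest)
    return y

path = samuelson()
# The shock triggers a damped oscillation: income overshoots,
# undershoots, and gravitates back toward the stationary level 500.
```

With these parameters the product c·v is below one and the characteristic roots are complex, so the cycle is damped; choosing c·v greater than one instead makes the oscillation explosive.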

The exogenous variables here roughly include all the rest, which cannot be subsumed under the endogenous variables. In a marginalist framework these are consumption preferences and production functions, in a physical economy or the framework of Sraffa they are the physical quantities of inputs and outputs. In a rational choice framework, they are the agents’ forecasts of the supply and demand of goods. In general, however, there are no limits to what can be included or excluded here.

The critical mathematical property of an exogenous – as opposed to an endogenous – variable is that, under the hypothesis of temporal continuity, its value a_{it} at a point in time t is given from outside the model, independently of the system's earlier states.

The state vector of all these exogenous variables at time t is:

a_t = {a_{1t}, a_{2t}, …, a_{nt}}   (2)

One can now write a general dynamic equation for the system:

x_t = f(a_t; x_{t-1})   (3)4

This is a difference equation in which the state of the endogenous variables at time t is related to states at earlier times t−1. If the time interval now becomes infinitesimal, so that one is dealing with continuous instead of discrete time, the following differential equation can be written:

dx/dt = f(a_t; x)   (4)
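The passage from the discrete equation (3) to the continuous equation (4) can be sketched with a toy adjustment function f(a, x) = a − x (an invented example, not taken from the text): iterating the difference equation with ever smaller time steps h approaches the exact solution of the differential equation.

```python
import math

# Toy adjustment process f(a, x) = a - x (invented for the sketch):
# the exogenous level a pulls the endogenous variable x toward it.
def iterate(f, a, x0, h, t_end):
    """Iterate the discrete dynamics x <- x + h*f(a, x) up to t_end."""
    x = x0
    for _ in range(round(t_end / h)):
        x += h * f(a, x)
    return x

f = lambda a, x: a - x
# Exact solution of dx/dt = a - x with x(0) = 5, a = 1: x(t) = 1 + 4*e^(-t)
exact = 1.0 + 4.0 * math.exp(-2.0)
for h in (0.5, 0.1, 0.01):
    print(h, abs(iterate(f, 1.0, 5.0, h, 2.0) - exact))  # error shrinks with h
```

The shrinking error as h goes to zero is the discrete-to-continuous limit the text describes; for finite h the two dynamics never coincide exactly.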

In a temporal system, the model serves to determine the average in the context of the observed variables, so that exoteric reality can be modelled as follows:

x̂_t = f(a_t; x_{t-1})   (5)

x_t = x̂_t + ɓ   (6)

ɓ is considered here as a residual term that accounts for the difference between the observed values and the predicted average, i.e. it represents those factors that are not included in the model in question (all models are incomplete).

In the equilibrium system, on the other hand, the following modelling occurs:

x*_t = f(a_t; x*_t)   (7)

x_t = x*_t + ɓ*_t   (8)

In this system it is assumed that x does not change, and precisely this is its problem. In equation (7), the time subscript is identical on the left and right sides. If the endogenous variables are not allowed to change within a dynamic system, then the only possible source of change is exogenous. There can be no movement of the system qua its endogenous properties, because the equations are solved under the condition that the endogenous variables do not change – which means nothing other than that the market is already perfect.

Esoteric and exoteric descriptions of economic systems can now be expressed mathematically. x_t is an exoteric variable that is observable, which means that all elements of a temporal paradigm can be directly observed and measured. x*, on the other hand, is an esoteric ideal that is de jure not observable, because it represents a state that the system will never reach. For this discrete-time system Marshall and Bortkiewicz used the term succession. Now equation (7) has at least one solution. There is, of course, more than one solution, but this one solution is given in order to explain why the system can reach the desirable state of perfection at all, although the question of why there can be no perfect state in itself is not addressed. With the help of mathematics, a universal method is established in the static equilibrium theories, which allows them to take over the role of a general metaphysics of economy instead of being merely the property of a particular school of thought. Some theorems prove that under general conditions a solution to equations (2) and (3) is possible, which depends on the function f and the initial values x0 at time t=0, usually discussed as boundary conditions. The equilibrium, once freed from its ideological traps, can now be conceived as a particular, restrictive solution to a more general temporal equation, i.e. as a hypothetically assumed stationary state in which x does not change over time. This holds under a wide range of conditions set by a set of theorems, the most general of which is perhaps Brouwer's fixed-point theorem. (Harzheim 1978) x*_t will, however, vary over time because of the changes in a_t given by equation (2), which is understood here as structural change. In the more general temporal solution (5), x_t has a complex trajectory which depends on the transfer function f(·).
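The contrast between the temporal solution and the equilibrium solution can be sketched for an illustrative linear transfer function f(a, x) = a + kx (invented for the example): the equilibrium x* solving x* = f(a; x*) is found by fixed-point iteration, while the temporal trajectory x_t = f(a; x_{t−1}) only ever gravitates toward it.

```python
# Illustrative linear transfer function f(a, x) = a + K*x (invented;
# any contraction with |K| < 1 would do).
K = 0.6

def f(a, x):
    return a + K * x

def fixed_point(a, x=0.0, tol=1e-10):
    """Solve the equilibrium condition x* = f(a; x*) by fixed-point
    iteration (converges since |K| < 1)."""
    while abs(f(a, x) - x) > tol:
        x = f(a, x)
    return x

a = 2.0
x_star = fixed_point(a)            # analytically a / (1 - K) = 5.0
trajectory = [0.0]                 # temporal system: x_t = f(a; x_{t-1})
for _ in range(40):
    trajectory.append(f(a, trajectory[-1]))
# The trajectory approaches x_star, but every finite x_t differs from it:
# the esoteric ideal is the limit, not an observable state.
```

The design choice mirrors the argument: x_star is computed by imposing the equilibrium condition, while the trajectory is computed by running the system in time; the two only coincide in the limit.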

In contrast, the theory of Competitive General Equilibrium (cf. Freeman 2015) demands in its elementary form that x_t = x*_t. This means that, for a given equation (2), the solution of the temporal equation (3) is identical to the solution of the equilibrium equation (7). And this is not true. Therefore, CGE retreats to the assumption that x_t can come sufficiently close to x*_t – a solution that varies from economist to economist – so that the differences can in the end be ignored. A further difficulty for equilibrium theory arises when the solution x*_t represents a centre of gravity around which the real x_t oscillates in the manner of a pendulum. But this is a rhetorical fiction that can only describe a restrictive range of movements. To a certain extent, a final vanishing point for equilibrium theory becomes possible here, whereby the temporal method remains hidden.

The neo-Ricardian literature (ibid.) obscures this point even further by resorting to special variants of fixed-point theory such as the Perron-Frobenius theorem (Hupert 1990), which is supposed to make it possible to find a fixed-point solution when f takes the form of a linear system whose coefficients carry certain justifiable properties. These properties indicate that the economic system is capable of reproducing itself, so that there is in fact a particular state of the system in which it does not change, because it itself provides the solutions that determine the values or, in the price system, the profit rate and production prices simultaneously and uniformly. (The Perron-Frobenius theorem concerns the existence, for non-negative matrices, of a positive eigenvector associated with the positive eigenvalue of greatest magnitude.) This means that only specialists who have understood linear matrix theory can understand why this theory works.5
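The Perron-Frobenius property invoked here can be made concrete with a small power iteration on an invented non-negative coefficient matrix: the iteration converges to the strictly positive eigenvector associated with the dominant eigenvalue – the kind of simultaneous, uniform solution the neo-Ricardian price system relies on.

```python
# Power iteration for the dominant eigenpair of a non-negative matrix;
# the input coefficients in A are invented for the sketch.
def dominant_eigenpair(A, iters=200):
    n = len(A)
    v = [1.0] + [0.5] * (n - 1)    # positive start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)               # eigenvalue estimate (max-norm)
        v = [x / lam for x in w]   # renormalise; stays strictly positive
    return lam, v

A = [[0.2, 0.3],
     [0.4, 0.1]]
lam, v = dominant_eigenpair(A)
# For this A the dominant eigenvalue is 0.5 with eigenvector (1, 1):
# a strictly positive, simultaneous and uniform solution.
```

The point of the sketch is the "specialist" character noted above: nothing in the economic material itself explains why the iteration converges – that is guaranteed only by the matrix theorem.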

Ultimately, with regard to the affirmation of a mathematical derivation of the stable market equilibrium, one must always set virtuality equal to actuality, i.e. eliminate the time factor completely, so that the processes of realising capital (actualisation of the virtual), which here are always those of equilibrium, take place simultaneously and immediately (under the conditions preceding them). The possibility would now be at once their execution.6

translated by deepl