Models in Economic Science

Economics is a regional knowledge that is constructed with the help of specific discourses and forms of knowledge – mathematics, statistics and epistemology. Knowledge is produced about an object (which supposedly exists independently of the discourse), and under certain conditions or ceteris paribus circumstances this knowledge can be specified as a mathematical model. In contrast, Laruelle’s concept of “economo-fiction” or “generic science” must be radically distinguished from economics – economo-fiction is in fact a new discipline that, among other things, aims at a non-Marxist interpretation and a transformation of the formalism of economics, including Marxist economics. “Economo-fiction” contains an axiomatic and generic knowledge, i.e. a genuinely generic theoretical practice. Laruelle defines the generic as follows: “A type of sciences or knowledges [connaissances] sufficiently neutral and devoid of particularity in order to be added to others more determined and co-operate with them, transforming them without destroying them or denying their scientific nature. They are capable of being added to others acquired in a more ‘classical’ way without unsettling what the latter take from their domain of object and legality, i.e. capable of transforming knowledge without philosophically destroying it.”61 (Laruelle 2008)


The following should be noted from the outset: to distinguish economo-fiction from economics solely in order to position it as a new regional knowledge is something we want to avoid as far as possible with Laruelle (otherwise economo-fiction would be pure economics-as-science). On the other hand, to assume that there can definitely be no regional knowledge qua economo-fiction means to subsume it entirely under the aegis of philosophy or non-philosophy. Economo-fiction aims instead at a new theoretical practice of performance, a new organon: it attempts to transform the regional knowledge of (Marxist) economics in a non-representational way through the use of quantum theory and fractal geometry, among other things, and this implies first of all destroying any common-sense ontology that is still present even in the Marxist critique of economics (such as the Marxist labor theory of value). Economo-fiction invents its own scientific models, objects and relations by means of a specific theoretical apparatus, in particular by cloning the Marxist material of the critique of economics. And if it uses mathematics, then certainly not in the manner of Badiou, who once again links the ultimate obligation of knowledge to the exercise of ontology with the use of set theory. The insight that models do not depict an (economic) world opens up a methodological debate that problematizes the integration of mathematical models into discourses (game theory), narratives and even into economo-fiction. With Laruelle, we also turn sharply against (economic) theories that call for a kind of transcendent numéraire or universal substance, as in the concept of utility in neoclassical theory.

Both theories – the neoclassical theory of utility and the labor theory of value – establish a circle that makes every economic event or object stagnate in its peculiar self-referentiality or tautology: this commodity is worth X euros because it brings the buyer X amount of utility, or, alternatively, because labor time X is stored in the commodity, we know that the price of the commodity must be X euros.62


If economo-fiction takes on economic structures and objects, it is not in order to represent them by means of linguistic theories and discourses, but to transform them exclusively by cloning the theoretical material, which can have many sources, and this definitely requires new scientific patterns – hypotheses, deductions and experimental tests. (Cf. Laruelle 2014: 66) A generic economic theory should definitely be freed from its philosophical “vices”, but it should by no means align itself with empiricism; instead, it opens up a theorem-without-theory and models (math-without-math). (Ibid.: 172) Theory does not synthesize empiricism, just as empiricism is not the measure of theory. The empirical confirmation of a hypothesis in macroeconomics is not that of a particular microeconomic function. In order to know this, a theoretical analysis is required which, for example, shows the relationship between the conclusion and the variables influencing it, whereby it must be clarified whether and why certain variables are relevant at the macroeconomic level or not.

The epistemological problem here is whether needs arise in the last instance, i.e. solely in mutual reference to the real, which excludes any direct or imaginary synthesis or conflation of empiricism and theory. In Althusser’s terminology, talk of empirical-theoretical knowledge would be based on the imaginary fusion of the real object and the object of knowledge. Non-Marxist “theory”, on the other hand, insists on the immanent-practical way of recognizing facts, which are always problematic, and this means that we cannot analyse data without the addition of theory. This concerns not so much the interpretation of economic events as their representation. Sometimes the concepts and categories seem to contradict empiricism, but insofar as they claim “truth”, they must always be placed in relation to empiricism, which, however, does not function here as the sole measure or judicial authority for theory. Without the investigation of facts, there is no development of theory.

Economo-fiction turns to the given (which is already theoretical material) in order to clone it. This does not imply any reflection of economic reality, but requires a transcendental-performative description according to the real, for which Laruelle uses the term “adequate-without-correspondence”. Particularly with regard to Marxism, it is by no means a question of emptying it of its content or alternatively re-arranging it with new content in order to reassemble it according to current socio-economic realities. Rather, one should treat its materialist, economic and political contents as simple symptoms that need to be reinterpreted and eventually transformed, in order to transform thinking according to reality (capital) into thinking according to the real; whereby the real always escapes reality in one way or another. For economics, this means that it is not in an exchange relationship with the reality of capital, but also not with the real.


Every use of the term “economic world” as an a priori or as a mnemonic mapping is based on philosophical thinking; one could even say with Laruelle that the words “world” and “philosophy” are interchangeable here. For if the theoretical body of economics is a “world”, then this implies that all possible economic discourses can be translated into philosophy, which then once again establishes itself as a meta-discourse above the regional knowledge of economics. In order to subvert this hierarchy, the following thesis should be put forward: there is no economic world! In this respect, economo-fiction would then be characterized as a non-knowledge rather than a knowledge (episteme/techne), to the extent that it deconceptualizes the material of Marxist economics and the social sciences, among other things, whereby the new economic models that may result do not represent the world, but rather subject it to a radical critique by first destroying the existing theories and models and replacing their claim to absolute sufficiency with contingency. Economics could then be understood as the science of non-conceptual social relations. No new economic “world” is created here; rather, to paraphrase Roland Barthes, in a first step the linguistically expressible is made inexpressible.
Many of the problems that the humanities have with the natural sciences arise simply because we are used to thinking exclusively in terms of discourse or concepts. For example, the economic concept of homo oeconomicus is usually criticized only because of its insufficient consideration of social and political relations. However, it must be possible to pursue a radical critique of economics without the direct use of politics. Here we immediately come up against the problem of exogeneity. In order to create an economic model with all possible accuracy, it would in fact have to include the models of all other sciences. However, economic models only deal with very specific causalities such as prices, production factors, human agents, production relations, ecologies, etc., and they divide these factors into endogenous variables, which are determined in the model, and exogenous variables, which are determined outside the model. Exogeneity nevertheless remains interesting here because it concerns all the different disparate orders (political, social, biological, etc.) of causality.

When an economist statistically calculates a liquidity premium, he can calculate it indirectly, i.e. in proportion to the variables of a model, without having to assume a totalizing factor (proportioning without proportionality). At best, econometric methods that do not work discursively attempt to determine the values of exogenous variables themselves using such models. They do not refer “correctly” to an object, but they process, qua a pragmatic use of numerical properties within the framework of a quasi-tautological model, in order to generate certain economic cases in a non-totalitarian way. Roughly speaking, econometrics works exclusively with numbers and not with words. Philosophy has no equivalent for this way of determining exogeneity and must therefore synthesize all possible causalities, i.e. philosophy only ever deals with exogeneities within the framework of its own decisions. And if certain phenomena of finance, such as exotic derivatives, cannot be translated into the language of philosophy, the philosopher considers them unrecognizable or relegates them to the “black box” of the financial system, perhaps in order to open a moralizing discourse about the bankers.
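To make the idea of “proportioning without proportionality” somewhat more tangible, here is a minimal sketch of how a liquidity premium might be estimated purely numerically, relative to the other variables of a small linear model rather than from any totalizing factor. The data, the variable names and the coefficients are invented assumptions for illustration; no actual econometric study is being reproduced.

```python
import numpy as np

# Hypothetical data: yield spreads of bonds plus two control variables.
# Everything here is simulated for illustration; no real data set is assumed.
rng = np.random.default_rng(0)
n = 500
maturity   = rng.uniform(1, 30, n)        # years to maturity
volatility = rng.uniform(0.05, 0.4, n)    # volatility proxy
illiquid   = rng.integers(0, 2, n)        # 1 = illiquid bond, 0 = liquid benchmark

# The "true" data-generating process is only known because we simulate it ourselves.
yield_spread = 0.02*maturity + 0.5*volatility + 0.7*illiquid + rng.normal(0, 0.1, n)

# OLS: the liquidity premium appears as the coefficient on the illiquidity dummy,
# i.e. in proportion to the other variables of the model, not as an absolute magnitude.
X = np.column_stack([np.ones(n), maturity, volatility, illiquid])
beta, *_ = np.linalg.lstsq(X, yield_spread, rcond=None)
print("estimated liquidity premium:", round(beta[3], 3))  # close to the simulated 0.7
```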
The model makes no distinction between the fact and the fact that something is a fact (facticity). If the model is defined with the use of certain conditions, it performs a quasi-tautological deduction of its endogenous relations, whereby differences and objects are continuously nullified or eliminated. For mathematical models, this means that conceptual knowledge or episteme is degraded to a secondary knowledge, or, in other words, conceptual knowledge or episteme is a by-product of factuality. At best, there remain epistemological questions for a “philosophy of economics” when they are related to economics-as-science. In the case of econometric models, it must therefore be understood that numerical properties are mainly used when they generalize a case and thus by no means represent anything. In most cases, however, even in mathematical model theory, the correlationist framework of philosophy is not completely abandoned, within which a “correct” economic model is precisely one whose internal relations somehow still refer to current economic objects. (Cf. Meillassoux 2008: 25)

Let us first take a rough look at the approach of model theory: With the help of a radically pragmatic, neutral use of language, axioms are first introduced – one starts with zero, so to speak, and then sets a series of conditions for variables that can be used in certain theoretical situations. In a sense, this language, when it concerns economics, must be convertible into mathematics or statistics, but first the “interpretation” of a set of economic phenomena is made and stripped of any contiguity in order to produce an elaborate tautology qua mathematical models, while at the same time external distinctions are transformed into exogenous parameters. In Laruellean terms, one could say that the critique of economics is characterized at this point at least by a constant non-learning, since conceptual knowledge is transformed into non-knowledge. It is deciphered rather than described.


Generic economists start with conceptual models that are based on discursive disciplines and reduce them to their non-conceptual “essence”. To use Niels Bohr’s analogy, economists try to elaborate how and through which scientific-technical means discourses have the capacity to produce meanings in order to relate these in turn to models of operativity. One takes conceptual statements from the ontology of a discipline (What is economics?) and shows in detail how these rightly or wrongly interfere with mathematically constructed models. Heisenberg wrote the following about this from the perspective of quantum physics: “What we observe (is) not nature itself […] but the nature that is exposed to our questions. (The) scientific work in physics consists in asking questions about nature in the language we possess and trying to obtain an answer through experiments that we carry out with the means at our disposal.” (Heisenberg 2006: 85) From a correlationist perspective, on the other hand, the classical economist constructs a model, a grid or a syntax in order to superimpose it on the world, and the truth then results from the correspondence between theory and world.

Let us assume that there is a multiplicity of disparate discourses that contain certain (theoretical) phenomena, problems and topics. An economist chooses a specific topic from them, tries to model the discourse and does so under the assumption that certain descriptions of the phenomenon eliminate contingency (related to data problems). An internally consistent modeling and interpretation that eliminates contingency must be found, whereby the model is considered correct. If it is “correct”, then this is the only possibility, whereby other possibilities or contingencies disappear. Exogenous variables remain the same or have the same function in relation to the model, although they can take on different values in different situations. Although the inclusion of a large amount of data in the model is desirable, it does not seem readily possible to apply the concept of experiment to economics, since the (scientific) experiment destroys the knowledge that previous experiments have produced about the system. Economics, on the other hand, must limit the contingency of each experimental event in order to gradually de-conceptualize it until it can be integrated into the relational contexts of a specific model. At the same time, economics-as-science works with the operative sampling of existing discourses and reduces them to useless metaphors. By incorporating the generic method, this science could be understood as a non-writing and a de-conceptualization with which contingency is not excluded, but at least limited. (The term eco-fiction describes ecology from the perspective of a non-correlationist theory. It refers to the fictionalist school of mathematics).
Let us pursue the problem of economic modeling further.

In his book The Global Minotaur, Yanis Varoufakis refers to an economic theorem that he helped to develop, which aims to prove that “solvable economic models cannot handle time and complexity at the same time”. (Varoufakis 2012: 169) In economic modeling, one either pays more attention to complexity (relationships between variables, coefficients and parameters) and usually develops static equilibrium models, or one privileges time and opts for a formalization of dynamics, in particular by using non-linear differential equations and designing non-static equilibrium models.63 Simultaneity of complexity and time would mean designing and discarding models that depict real-time price movements and transactions, thus allowing crises to be included in the models; this cannot succeed if the models transform the economy, which in its processuality escapes them again at the same moment. Economic models use the modern axiom concept of mathematics, whereby the axiom is set “arbitrarily” (on the condition of ceteris paribus assumptions – if-then) and attention is paid purely to logical consistency when applying it.64

For an operationalization of the relations of total capital, this means that non-linear dynamics of highly complex systems must be constructed which at the same time “correspond” to the competition between the individual capitals. The models must take into account both the diverse feedbacks in the dynamic time courses and the simultaneous interactions between different individual capitals (adequate-without-correspondence). A distinction must be made between useful distribution functions, whose dynamics are modeled with stochastic equations (macro level), and the calculation of random fluctuations at the micro level.

Philip Mirowski has shown that the model of dynamic stochastic general equilibrium (DSGE) is a very special case in which a one-person economy corresponds to the overall economic system and therefore a reconciliation of micro- and macroeconomics can occur. (Mirowski 2015: Section 5; Kindle edition) The very existence of heterogeneous individual capitals does not allow for this type of generalization, which supposedly leads to equilibrium at the macroeconomic level (a state); rather, the always fragile balance is achieved within the framework of turbulent and cyclical regulation through periodically occurring boom and depression phases that circulate around centers of gravity. Therefore, in order to control the non-linear processes in their temporal turbulence, new axioms must constantly be added or old ones subtracted. But how can we deal with undecidable propositions or the forces of infinite quantities that cannot be grasped axiomatically? How does axiomatics, which is based on countable models, relate to a continuum or to leaps? And finally, how do financial-mathematical axiomatics and the corresponding coherent risk measures relate to currents in mathematics (intuitionism) that are adequate to deterritorialized money and escape the coherence of the calculus? In the best case, the measure must then be transferred to becoming itself; or, to put it another way, a simulative model generated using the extremely high computing power of a supercomputer would have to have the capacity to depict and/or measure monetary capital flows, their rhythms and monetary transactions in real time (today, at least, we are moving away from the view of models as mere depictions of economic reality). One possibility is to subject the models themselves to stress tests that probe their robustness under extremely fluctuating scenarios, including the presence of crises, jumps and breaks (the worst case, so to speak, of a model dancing out of line deserves special attention), in order to find effective solution procedures or to calculate probabilities on the basis of the collection and evaluation of gigantic amounts of data. (Ibid.) Even improbabilities, think of Taleb’s black swan, should still be taken into account. The algorithms always remain on the hunt for what lies outside their territory and its probabilities; they are virtually defined by what they are not yet and what they may never become.
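One way to read the remarks on stress tests, jumps and improbable scenarios is as a Monte Carlo exercise: simulate a large number of price paths, let some of them jump, and inspect the tail of the resulting distribution. The following sketch does this under entirely arbitrary assumptions (drift, volatility, jump frequency and jump size are invented parameters); it illustrates the kind of procedure meant, not a model proposed by Mirowski or anyone else cited here.

```python
import numpy as np

# Monte Carlo stress sketch: geometric Brownian motion with occasional downward jumps.
# All parameters are arbitrary assumptions chosen for illustration.
rng = np.random.default_rng(1)
n_paths, n_steps, dt = 10_000, 250, 1/250
mu, sigma = 0.05, 0.2                 # drift and diffusion volatility
jump_prob, jump_size = 0.01, -0.15    # 1% chance per step of a -15% jump

diffusion = (mu - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*rng.standard_normal((n_paths, n_steps))
jumps = np.where(rng.random((n_paths, n_steps)) < jump_prob, np.log1p(jump_size), 0.0)
terminal = np.exp((diffusion + jumps).sum(axis=1))   # terminal value of 1 unit invested

# A crude "stress" statistic: the 1% worst outcome across the simulated scenarios.
print("1% worst-case terminal value:", round(np.quantile(terminal, 0.01), 3))
```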

Let us go one step further. It is a common finding of heterodox economics that general equilibrium theory neglects the question of dynamics and crisis-prone growth and that even its extended methodology has not led to the hoped-for proof of the stability of equilibria. The Sonnenschein-Mantel-Debreu theorem (the models of general equilibrium theory always remain at their general level) points to this in particular. In economics, the various equilibrium theories have often been presented as models of the actions of rational agents. However, this should be treated with extreme caution! It may be that these models include far too few factors, so that stable economic equilibria cannot necessarily be derived from them. Furthermore, the assumption of the rationality of economic agents is far too sweeping, considering the results of “behavioral economics”, for example, according to which factors such as herd behavior, imitation and rules of thumb tend to co-determine the actions of agents. More recent approaches such as “econophysics” therefore modulate the theory of the rational agent more finely, introduce solutions for diametrical interactive conflicts in the course of applying game theory and thus attempt to problematize the identity of the agent and its egoism in view of the many roles it has to take on, without, however, completely abandoning the subjectivist or correlationist thesis. However, as long as more precise criteria for economic stability cannot be specified, the concept of equilibrium remains a pure model construct. For this reason, the question of the conditions under which the interactions of the agents can lead to an equilibrium – whether Smith’s “gravitating” actually takes place – is explored further. In this way, the models become more and more dynamic.

The difficulty of even being able to solve the problem of stability within the framework of models and their ceteris paribus assumptions means that questions about the real existence and uniqueness of equilibria arise again. At this point, the position formulated by Marx and even by F.A. Hayek that macroeconomic flow dynamics are processes that take place behind the backs of the agents can no longer be dismissed.


A whole series of other objections can be raised against certain types and methods of modeling. For example, the formal structure of a model (the relations between the elements) may be too differentiated to be somehow related to “real economic variables” (the model then in fact remains purely tautological); on the other hand, there may also be an internal simplification of the model, so that the assumptions made with regard to model formation become increasingly important. Crude mapping theories, according to which models map economic reality or refer to its phenomena, will hardly solve the problem. If, on the other hand, the performative act of the model is overemphasized, the model must be able to prove its place of use and its operational success. What is at stake is not only the internal, non-discursive logic of the model itself, but also its relation to a reality that cannot itself be grasped substantially or phenomenologically if one wants to escape the convertibility or reciprocity between theory/model and reality. Reality could therefore only be defined as reality with reservations; it would at least have to be supplemented with the concept of virtuality. (Cf. Strauß 2013: 180) Reality is then neither the object and/or result of a conceptual-syntactic discursivity nor that of a non-discursive model. Its “determination” as real virtuality is more about what escapes theory, mathematics and the model, and ultimately any representational objectivity.

Let us look at the various equilibrium theories and their models a little more closely. From the perspective of physics, the following can be said about statistical model theory: if a lake changes its water level according to its inflow and outflow of water, then equilibrium theory studies the water level of the lake at one point in time and then again at another. While this may still work for determining the water levels of lakes, what this theory does not explain at all are the dynamic movements of waterfalls and storms. Confronted with turbulent water or waves, the equilibrium theorist has no suitable mathematical instrument to explain these movements.
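The lake analogy can be made concrete in a few lines (all numbers here are assumptions): with a constant inflow and a level-dependent outflow, the water level settles at a fixed point, which is all that the equilibrium description captures; with a turbulent, storm-like inflow there is no stable level to report at all.

```python
import numpy as np

# Toy lake model: d(level)/dt = inflow - k * level, integrated with a simple Euler step.
# The parameters and the "storm" forcing are assumptions for illustration only.
k, dt, steps = 0.1, 0.1, 2000
rng = np.random.default_rng(2)

def final_level(inflow_fn):
    level = 0.0
    for t in range(steps):
        level += dt * (inflow_fn(t) - k * level)
    return level

calm  = final_level(lambda t: 1.0)                                # constant inflow
storm = final_level(lambda t: 1.0 + 2.0 * rng.standard_normal())  # turbulent inflow

print("calm lake settles near inflow/k =", 1.0 / k, "->", round(calm, 2))
print("stormy lake at the same moment:", round(storm, 2))  # a snapshot, not a stable level
```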

Most economic schools include an equilibrium version in their discourses (IS-LM Keynesianism, the equilibrium Marxism of Sweezy, Morishima and Steedman, Marshall’s Competitive General Equilibrium; cf. Freeman 2015/Sweezy 1971) and a non-equilibrium version or a temporal variant (Keynes’s and Kalecki’s temporalism, temporalist interpretations of Marx’s theory of capital accumulation such as that of Andrew Kliman, Austrian marginalism, etc.). But for the most part, equilibrium theory remains the dominant paradigm. (Static) equilibrium theory is a normative standard that eliminates dynamic movement – it attempts to show this with simultaneous equations which imply that all significant variables of the model are the same at the end of a given period as at the beginning. From all these equilibrium theories it is hardly possible to derive crisis-like phenomena, which would actually have to be introduced here as endogenous features. It would then have to be shown that exact equilibrium is a fleeting or non-stationary phenomenon, because every given variable constantly exceeds or falls below the center of gravity. (Cf. Shaikh 2016) Equilibrium is therefore not to be understood as a state, but as a turbulent and cyclical process characterized by repeated fluctuations of different durations and amplitudes. In contrast, neoclassical theory must assume external causes outside the system for crisis phenomena, such as incorrect monetary policy, oil shocks, incompetent government policies, irrational behavior of economic agents, etc. If the economic system a priori tends towards perfect reproduction, then serious deviations from the determined equilibrium cannot logically occur. And if deviations do occur, then politics must regulate the markets – which are by no means always perfect – in such a way that they are ultimately in harmony with the ideal of equilibrium.

The variables in the model are usually divided into two groups: endogenous variables such as prices and quantities, and exogenous variables such as politics, culture or psychology. The decisive factor here is that economic equilibrium theory distinguishes between endogenous and exogenous causes. For a natural scientist, by contrast, the temporal approach would be the basic training for further exercises. The models can now be extended by taking into account rates of change or the differential calculus, ∂x/∂y (“derivative of x with respect to y”). The model then functions qua its temporal connections, relations and conjunctions, for which the term equilibrium can possibly even be retained, provided it is not drastically reduced to the definition of a state. Ayache describes the differential as follows:

“The differential is such that neither of the two entities (dy, dx) that are seemingly related by the differential are present in the differential. The differential is only the relation, not the actual entities. It is only the power of producing, or generating, the co-variation of the two mathematical entities when they come to be actualized. It is a place of repetition and retrieval (extraction) rather than a finished result. It is the place where the function (to be actualized) is determined, that is to say, differentiated, the place where it could have been otherwise yet it is faceted and cut to be this way, the place where the rift separating the variables and orienting their relative differences (in other words, their future co-variation) is first opened and the function is first shaped.” (Ayache 2010a: 293-294)
Let us therefore assume that a system/model contains two types of variables, exogenous and endogenous. Endogenous variables are those that the economist assumes are intrinsic to the markets, such as prices, quantities, labor input, interest rates, wages, etc. (We follow Freeman 2015 for the following illustration.)

The state vector of all these variables at time t is:
x_t = {x_1t, x_2t, …, x_nt} (1)
x_t varies according to the problems being studied, the key issue being how the system/model moves from one point to the next. This relates to the problem of temporal approximation, which is initially understood here as discrete. Freeman cites the law of gravity and Newton’s laws of motion as examples, the laws of thermodynamics, etc. In economics, examples are Kalecki’s price equations, accelerator-multiplier systems, Harrod’s growth equations, Marx’s reproduction schemes or the non-linear cyclical models introduced into economics by Richard Goodwin in general and Joseph Schumpeter and Paul Samuelson in particular. (On the equilibrium conditions of Marx’s reproduction schemes, including the matrix representation, see Willi Semmler 1977: 170f.)
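Goodwin’s growth cycle is the classic example of such a non-linear cyclical model: wage share and employment rate chase each other like predator and prey and never settle into a stationary equilibrium. The sketch below integrates the standard Lotka-Volterra form of the model with invented parameter values; it is meant only to illustrate this type of dynamics, not to reproduce Goodwin’s or Freeman’s own calculations.

```python
import numpy as np

# Goodwin-style growth cycle (Lotka-Volterra form): u = wage share, v = employment rate.
# Parameter values are illustrative assumptions, not calibrated to any actual economy.
alpha, beta, gamma, rho, sigma = 0.02, 0.01, 0.04, 0.1, 3.0
u, v, dt = 0.88, 0.65, 0.01

u_path = []
for _ in range(50_000):
    du = u * (rho * v - (alpha + gamma))          # wages respond to employment
    dv = v * ((1 - u) / sigma - (alpha + beta))   # accumulation responds to the profit share
    u, v = u + dt * du, v + dt * dv
    u_path.append(u)

u_path = np.array(u_path)
# The wage share keeps cycling instead of converging to a stationary value:
print("wage share min/max over the run:", round(u_path.min(), 3), round(u_path.max(), 3))
```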

Roughly speaking, the exogenous variables here include all the rest that cannot be subsumed under the endogenous variables. In a marginalist framework, these are consumption preferences and production functions; in a physical economy or Sraffa’s framework, these are the physical quantities of inputs and outputs. In a rational choice framework, it is the agents’ forecasts regarding the supply and demand for goods. In general, however, there are no limits to what can be included or excluded here.
The critical mathematical property of an exogenous variable – in contrast to an endogenous variable – is that, under the hypothesis of temporal continuity, its value at a certain point in time t is dependent on the value at another point in time. (Freeman 2015)

The state vector of all these exogenous variables at time t is:
a_t = {a_1t, a_2t, …, a_nt} (2)
A general dynamic equation can now be written for the system:
x_t = f(a_t; x_t-1) (3)65
This is a difference equation in which the state of the endogenous variables at time t is inevitably related to the state at the earlier time t-1. If the time interval becomes infinitesimal, so that we are dealing with continuous instead of discrete time, then the following differential equation can be written:
dx/dt = f(a_t, x) (4)
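As a minimal sketch of what it means to iterate equation (3) forward, the following toy system, loosely in the spirit of an accelerator-multiplier model, computes each state from the state one period earlier and from an exogenous spending path. The transfer function f and all coefficients are assumptions chosen only to show the recursion at work.

```python
import numpy as np

# Equation (3) iterated forward for a toy accelerator-multiplier system:
#   Y_t = c*Y_{t-1} + v*(Y_{t-1} - Y_{t-2}) + G_t
# State vector x_t = (Y_t, Y_{t-1}); the exogenous variable a_t is spending G_t.
# Coefficients and the spending path are invented for illustration only.
c, v = 0.8, 0.9

def f(a_t, x_prev):
    y1, y2 = x_prev                    # Y_{t-1}, Y_{t-2}
    y = c*y1 + v*(y1 - y2) + a_t       # new output Y_t
    return np.array([y, y1])           # new state x_t

x = np.array([100.0, 100.0])           # initial state x_0
path = []
for t in range(60):
    a_t = 20.0 if t < 30 else 25.0     # exogenous spending shifts once, mid-run
    x = f(a_t, x)
    path.append(x[0])

# Output oscillates toward a new level once the exogenous variable changes:
print("output after the exogenous shift:", [round(y, 1) for y in path[-5:]])
```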

In a temporal system, the model is used to determine an average for the observed variables, so that the exoteric reality can be modeled as follows:
x_t = f(a_t; x_t-1) (5)
X_t = x_t + ɓ_t
ɓ_t is regarded here as a residual term that accounts for the difference between the observed values and the predicted average, i.e. it represents those factors that are not included in the model in question (all models are incomplete).
In the equilibrium system, on the other hand, the modeling looks like this:
x_t* = f(a_t; x_t*) (6)
X_t = x_t* + ɓ_t* (7)
In this system it is assumed that x does not change, and this is precisely the problem. In equation (6), the time variable on the left-hand and right-hand sides is identical. If the endogenous variables are not allowed to change within a dynamic system, then the only possible reason for change can be an exogenous one. There can be no movement of the system qua its endogenous properties, because the equations are solved under the assumption that the endogenous variables do not change, which means nothing other than that the market is already perfect. (Ibid.)
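The contrast between the temporal system (5) and the equilibrium system (6) can be put into a few lines of code. The transfer function and all numbers below are toy assumptions: the equilibrium system solves for the self-reproducing state x* = f(a, x*), in which time plays no role, while the temporal system feeds yesterday's state back in, adds a residual, and lets x_t move.

```python
import random

# A toy transfer function f(a, x); the functional form and all numbers are assumptions.
def f(a, x):
    return a + 0.6 * x

a = 10.0

# Equilibrium system (6): solve x* = f(a, x*) directly -- time plays no role.
x_star = a / (1 - 0.6)                        # solvable by hand here: x* = 25
print("equilibrium solution x*:", x_star)

# Temporal system (5): x_t depends on x_{t-1}; the observation adds a residual term.
random.seed(3)
x_model, observed = 0.0, []
for t in range(50):
    x_model = f(a, x_model)                   # esoteric prediction x_t = f(a, x_{t-1})
    X_obs = x_model + random.gauss(0, 1)      # exoteric observation X_t = x_t + residual
    observed.append(X_obs)

print("last observed values X_t:", [round(value, 2) for value in observed[-3:]])
# The observed trajectory hovers around x* without ever simply coinciding with it.
```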

Esoteric and exoteric descriptions of economic systems can now be described mathematically. X_t is an exoteric variable that is observable, which means that all elements of a temporal parameter can be directly observed and measured. x, on the other hand, is an esoteric ideal that is de jure unobservable because it represents a state that the system never reaches. Marshall and Bortkiewicz used the term succession for this discrete time system. Now equation (6) gives at least one solution. There is, of course, more than one solution, but this one solution is given in order to explain why the system can reach the desirable state of perfection at all; the question of why there cannot be a perfect state per se is not addressed. With the help of mathematics, static equilibrium theories thus provide a universal method which allows them to take on the role of a general metaphysics of economics instead of merely being identified as the property of a particular school of thought. Some theorems prove that under general conditions a solution to equations (2) and (3) is possible, depending on the function f and the initial values x_0 at time t=0, which are usually thematized as boundary conditions. The equilibrium, when freed from its ideological traps, can now be conceived as a particular, restrictive solution of a more general temporal equation, i.e. as a hypothetically assumed stationary state in which x does not change over time. This holds under a wide range of conditions set by a set of theorems, the most general of which is probably the “Brouwer fixed-point theorem”. (Harzheim 1978) However, X_t will vary over time because of the changes in a_t given by equation (2), which is again understood here as a structural change. In the more general temporal solution (5), x_t has a complex trajectory that depends on the transfer function f().

The theory of Competitive General Equilibrium (cf. Freeman 2015), on the other hand, requires in its elementary form that x_t = x_t*. This means that, for a given equation (2), the solution of equation (6) is identical to the solution of equation (3). And this is not true. Therefore, CGE again assumes that x_t can come sufficiently close to x_t*; a solution that varies from economist to economist, so that we can ignore the differences in the end. A further difficulty arises for equilibrium theory if the solution x_t* represents a center of gravity around which the real X_t oscillates in the manner of a pendulum. However, this is a rhetorical fiction that can only describe a restrictive range of movements. In a sense, a final escape route for equilibrium theory opens up here, whereby the temporal method continues to be ignored.

The neo-Ricardian literature (ibid.) obfuscates this point even further by referring to special variants of “fixed-point theory” such as the “Perron-Frobenius theorem” (Hupert 1990), which makes it possible to find a fixed-point solution when f takes the form of a linear system whose coefficients have certain verifiable properties. These properties indicate that the economic system is capable of reproducing itself, so that there is in fact a particular state of the system in which it does not change, because it itself provides the solutions that determine the values or, in the price system, the rate of profit and the prices of production simultaneously and uniformly. (The Perron-Frobenius theorem concerns the existence of a positive eigenvector for the positive, largest eigenvalue of a non-negative matrix.) This means that only specialists who have understood linear matrix theory will recognize why this theory works.66
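For illustration, here is a minimal numerical sketch of this point about the Perron-Frobenius theorem, using an arbitrary non-negative 3x3 input matrix (the coefficients are invented, and wages are ignored): the dominant eigenvalue yields a uniform rate of profit and the associated positive eigenvector yields relative prices of production, both determined simultaneously, which is exactly the self-reproducing state described above.

```python
import numpy as np

# Sraffa-type price system p = (1 + r) * p @ A for a non-negative input matrix A.
# The Perron-Frobenius theorem guarantees a positive dominant eigenvalue and a
# positive (left) eigenvector for such a matrix. The coefficients below are
# arbitrary assumptions, chosen only so that the system is productive (lambda < 1).
A = np.array([[0.20, 0.10, 0.10],
              [0.15, 0.25, 0.05],
              [0.10, 0.20, 0.30]])

eigvals, eigvecs = np.linalg.eig(A.T)   # left eigenvectors of A = right eigenvectors of A.T
k = np.argmax(eigvals.real)             # index of the dominant (Perron) eigenvalue
lam = eigvals.real[k]
p = np.abs(eigvecs[:, k].real)          # the Perron vector can be taken strictly positive
p = p / p[0]                            # normalize: price of good 1 = 1

r = 1 / lam - 1                         # uniform rate of profit implied by the eigenvalue
print("dominant eigenvalue:", round(lam, 3))
print("uniform profit rate r:", round(r, 3))
print("relative prices of production:", np.round(p, 3))
```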

Ultimately, with regard to the affirmation of a mathematical derivation of stable market equilibrium, virtuality must always be equated with actuality, i.e. the time factor must be completely eliminated, so that the realization processes of capital (the actualization of the virtual), which here are always those of equilibrium, take place simultaneously and immediately (under the conditions that precede them). Possibility would then be nothing other than its execution all at once.

62 The neoclassical theory of utility is purely subjectivist from the outset; it refers to the supposedly autonomous needs of individual consumers, which, however, have long since been designed by a variety of bodies and forces (marketing, advertising, opinion industry, etc.). Today, people are no longer consumers, they are consumed, as their dividual and affective relations are consumed as data and signs by thousands of economic machines that carry out the necessary mapping and tracking every day using algorithms. These machine systems “know” the most intimate desires and needs of consumers, their affective relations and their dividuality, and with this recognition, the system of needs itself becomes the result of a production that still consumes the consumer. The epistemological problem with this topic is whether the needs actually originate from the desires of the subjects or whether the needs constructed qua media and consumption rather control the subjects.

63 The question of the possibility of economic equilibrium was answered in the 1950s by Gérard Debreu using the categories of game theory. The basic ideas of neoclassical market theory were largely adopted. According to this, the market is a) a mechanical machine coordinated by the fundamental law of supply and demand; b) neoclassical theory itself provides its own organigram; c) it is therefore open to construction; d) it can be calculated or simulated by computer operations and is therefore also regarded as a thermostatic regulator.
64 The question is how the probability distribution changes under assumed conditions over a certain period of time. (Master equation, cf. Mainzer 2014: 218) Computer simulations can be used to represent the changing flows of two populations as different attractors, analogous to flow dynamics. Today, we are dealing with the increased complexity of data and the increased output of data over time. If we assume that even the previously known natural constants (Einstein’s speed of light c, Planck’s constant of action h, the gravitational constant G, etc.) are not eternally valid quantities, i.e. they can change in the course of evolution, then this is even more true of the time courses and scales of the economy.


65 x_t can contain differences or derivatives of its other components, for example x_t = (p_t, p_t-1) or (p_t′, p_t-1′). This makes it possible to express dynamic relations of order greater than 1.

66 On Marx-Sraffa P, cf. Michael Gaul 2015.

67 Even if it cannot be assumed that financial capital creates money out of nothing, it can at least be assumed that creditors today issue loans, bonds, derivatives, etc. to debtors in ever higher volumes than will be available in the future in the form of collateral, productivity growth and opportunities for realizing profit. In his book The Terrible Children of the Modern Age, Sloterdijk subsumes this economic phenomenon under the supposedly universally valid main theorem of civilizational dynamics, which states that the sum of the energies released per se exceeds the binding capacity of civilizational forces. (Sloterdijk 2014: 85ff.) If this is the case, the question arises as to why the problem of the differential temporalization of monetary capital flows is usually ignored in economic theory: in order to continue to propagate the stability of capital and to ignore its immanent susceptibility to crises. Although one may suspect that the possibility of economic crises cannot be eliminated by applying a mathematically founded axiomatics that is based purely on functional elements and relations (and can therefore be applied to the most diverse areas), the equilibrium-oriented axiomatics is nevertheless supposed to continue affirmatively generating certain realization models of capitalization.

Translated by DeepL.
