HoTT: pseudoproblems, extensional propositions and intensional constructions

Published by Graham Vunderink Gallery – February 1, 2023

Why should one subscribe to the idiocy of the anthropologist René Girard over the logician Jean-Yves Girard? Subscribing to the “mimetic theory of desire” is a base perversion of what is intelligible within philosophy and leads the techno-dystopian Peter Thiel to confess that people desire something only insofar as other people desire it. “Man is the creature who does not know what to desire, and he turns to others in order to make up his mind. We desire what others desire because we imitate their desires.” (Girard) Why naturalize such a faulty theory of humanity? There seems to be a spectrum between those who prefer to subscribe to ideology and participate in society under it (“drink the juice”) and those who categorically dismiss such statements as “metaphysical pseudo-statements” (Carnap), recognizing that the domain of the intelligible is limited to what is logically provable and what is empirically verifiable (hence the school of “logical empiricism”). Carnap and Wittgenstein both began their careers as philosophers in the philosophy of mathematics. The logician Jean-Yves Girard invented System F and its higher-order extension Fω, which would form the basis for type-theoretic languages such as Haskell.

The history of mathematics has grappled with the distinction between analytic and synthetic constructions. Analytic constructions can be expressed in the manner in which Descartes formulated algebraic geometry: the Cartesian plane of ordered pairs. Take for example the graph of y = x. Synthetic constructions go back at least to Euclid, with his use of triangles-in-themselves. Hilbert attempted a formalist school of mathematics founded on axiom systems for points, lines, and planes, a program whose goal of proving its own consistency Gödel showed to be unattainable. Even the attempt at analytic foundations by the anonymous group of mathematicians Bourbaki (directed by its de facto leader André Weil) through the theory of sets ultimately failed to account for category theory, save in the work of Grothendieck (which was not officially published as Bourbaki). The attempt to unify mathematics in a universal program of “mother structures” failed precisely because of its excessive pedantry about analytic constructions.

Intuitionistic intensional type theory, as developed by Per Martin-Löf, generalized Russell’s theory of types, but ultimately in the other direction, toward synthesis, with the invention of a type of all types. Proofs are relevant only insofar as what they construct. “Types classify constructions.” (Robert Harper) The methods of introduction and elimination, together with the ‘inversion principle’ (or ‘conservation of proof’), form the type-theoretic proofs, in other words, the constructions. “Type theory is an analysis and codification of Brouwer’s intuitionism drawing on Gentzen’s proof theory.” (Robert Harper) Hilbert, with his universalist program of Formalism, famously attacked Brouwer’s school of intuitionism as heretical, but ironically, intuitionism today is the only hope for a Mathesis Universalis, via synthetic constructions.
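The slogan “types classify constructions” can be made concrete in a proof assistant. The following is a minimal sketch in Lean 4 (Lean is used here only as a convenient vehicle for Martin-Löf-style type theory; the specific examples are illustrative, not drawn from the sources cited above):

```lean
-- Propositions-as-types: a proof is a construction, i.e. a term of a type.
-- Conjunction introduction is pairing; elimination is projection.

-- A proof of A ∧ B → B ∧ A is literally a function that swaps a pair.
example {A B : Prop} : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩

-- The intuitionistic demand for a witness: to prove an existential
-- one must actually construct a witness (here, 3) together with
-- evidence that it has the stated property.
example : ∃ n : Nat, n + 1 = 4 :=
  ⟨3, rfl⟩
```

The introduction and elimination rules the text mentions are exactly what the pairing `⟨·,·⟩` and the projections `.1`, `.2` perform.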

With the development of Homotopy Type Theory there arose an inherently synthetic approach to the foundations of mathematics, one that does not affirm the Law of Excluded Middle globally (the axiomatic freedom of constructive mathematics, contra Hilbert) and therefore has the type-theoretic generality to instantiate topological paths as types themselves.
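The reading of identity proofs as paths can be sketched in Lean 4. A caveat labeled in the comments: Lean’s `Eq` stands in for the HoTT identity type here only as an illustration, since Lean itself validates axiom K and is therefore not a homotopy type theory:

```lean
-- In HoTT, the identity type a = b is read as the type of paths from a to b.
-- Path reversal and composition fall out of the eliminator; Lean's `Eq`
-- plays the role of the identity type in this sketch.

-- Reversing a path: from p : a = b, construct a path b = a.
def pathInv {A : Type} {a b : A} (p : a = b) : b = a :=
  p.symm

-- Composing paths end to end.
def pathComp {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
  p.trans q
```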

Now for a bit of history:

Russell’s theory of types was a response to his discovery of what is now called Russell’s paradox, first sketched in an appendix to his Principles of Mathematics and only later developed fully in the monumental Principia Mathematica with Whitehead:

Of course, Russell too was concerned about the consequences of the contradiction. Upon learning that Frege agreed with him about the significance of the result, he immediately began writing an appendix for his own soon-to-be-released Principles of Mathematics. Entitled “Appendix B: The Doctrine of Types,” the appendix represents Russell’s first attempt at providing a principled method for avoiding what soon was to become known as “Russell’s paradox.”

Russell’s foundational theory of mathematics descends historically from Frege’s philosophy of logicism. Russell attempted to resolve his paradox by restricting to predicative statements and adopting the axiom of reducibility, by which impredicative statements could essentially be reduced to predicative ones. This would later lead Wittgenstein to criticize the axiom of reducibility for implying implicit meanings behind propositional signs. Many felt that adopting such an axiom was ad hoc, an afterthought. Wittgenstein writes in the Tractatus:

3.33 In logical syntax the meaning of a sign ought never to play a role; it must admit of being established without mention being thereby made of the meaning of a sign; it ought to presuppose only the description of the expressions.

3.331 From this observation we get a further view – into Russell’s Theory of Types. Russell’s error is shown by the fact that in drawing up his symbolic rules he has to speak of the meaning of the signs.

3.332 No proposition can say anything about itself, because the proposition sign cannot be contained in itself (that is the “whole theory of types”).

3.333 A function cannot be its own argument, because the functional sign already contains the prototype of its own argument and it cannot contain itself. […] Herewith Russell’s paradox vanishes.

6.123 It is clear that the laws of logic cannot themselves obey further logical laws. (There is not, as Russell supposed, for every “type” a special law of contradiction; but one is sufficient, since it is not applied to itself.)

6.1231 The mark of logical propositions is not their general validity. To be general is only to be accidentally valid for all things. An ungeneralised proposition can be tautologous just as well as a generalised one.

6.1232 Logical general validity, we could call essential as opposed to accidental general validity, e.g., of the proposition “all men are mortal”. Propositions like Russell’s “axiom of reducibility” are not logical propositions, and this explains our feeling that, if true, they can only be true by a happy chance.

6.1233 We can imagine a world in which the axiom of reducibility is not valid. But it is clear that logic has nothing to do with the question of whether our world is really of this kind or not.

Perhaps what is wrong with the history of the analytic tradition of mathematics is an overreliance on extensionality over intensionality.

Frege gave the outline of a theory of intensionality, but no intensional logic in any formal sense. There have been attempts to fill in his outline. Alonzo Church (1951) went at it quite directly. In this paper there is a formal logic in which terms have both senses and denotations. These are simply taken to be different sorts, and minimal requirements are placed on them. Nonetheless the logic is quite complex. The formal logic that Frege had created for his work on the foundations of mathematics was type free. Russell showed his famous paradox applied to Frege’s system, so it was inconsistent. As a way out of this problem, Russell developed the type theory that was embodied in Principia Mathematica. Church had given an elegant and precise formulation of the simple theory of types (Church 1940), and that was incorporated into his work on intensionality, which is one of the reasons for its formal complexity.

Russell was not satisfied with the second edition [of Principia Mathematica], and later he endorsed an extensional view in which logic need not account for intensional phenomena. Although he disagreed with Ramsey on certain points—most notably identity—he accepted Ramsey’s criticism that the stratifications of Russell’s theory of types were unnecessarily severe and that simple type theory suffices to prevent the paradoxes. By replacing ramified type theory with simple type theory, neither the axiom of reducibility nor an alternative axiom is necessary to allow quantification over functions satisfied by some object. Since this view does not account for intensional functions, it is less general than Russell’s initial goal; however, it does give him the desired result of reducing mathematics to a logical system.

Homotopy Type Theory is not conceivable in the extensional framework of type theory. One need only note that axiom K and the Univalence Axiom (the central axiom of HoTT) are incompatible:

In type theory, the axiom K is an axiom that when added to intensional type theory turns it into extensional type theory — or more precisely, what is called here “propositionally extensional type theory”. In the language of homotopy type theory, this means that all types are h-sets, accordingly axiom K is incompatible with the univalence axiom.
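The incompatibility can be seen concretely. In Lean 4, whose kernel validates axiom K (via proof irrelevance for `Eq`), uniqueness of identity proofs is provable outright; a sketch, offered as an illustration rather than anything from the quoted source:

```lean
-- Axiom K collapses all identity proofs: every type becomes an h-set.
-- In Lean this is provable directly, since any two proofs p q : a = b
-- are definitionally equal by proof irrelevance.
theorem uip {A : Type} {a b : A} (p q : a = b) : p = q :=
  rfl
-- Under univalence, by contrast, the identity type `Bool = Bool` in the
-- universe has two distinct elements (corresponding to the identity and
-- negation equivalences), so K must fail.
```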

This is all to say that intuitionistic type theory dismisses the use of impredicative statements (like the logicist Russell), but generalizes the theory to include a “type of all types”. Intuitionistic logic can be summarized by what Robert Harper calls ‘logic as if people matter’. Through ‘propositions-as-types’ we arrive at the heart of intuitionism and, moreover, of constructive mathematics: every proof demands a witness, and the whole construction is itself demonstrated as a type.

Martin-Löf’s introduction of a type of all types comes from the identification of the concept of propositions and types, suggested by the work of Curry and Howard. It is worth recalling here his three motivating points:

  1. Russell’s definition of types as ranges of significance of propositional functions;
  2. the need to quantify over all propositions (the impredicativity of simple type theory);
  3. the identification of propositions and types.

Given (1) and (2) we should have a type of propositions (as in simple type theory), and given (3) this should also be the type of all types. Girard’s paradox shows that one cannot have (1), (2), and (3) simultaneously. Martin-Löf’s choice was to take away (2), restricting type theory to be predicative (and, indeed, the notion of universe appeared first in type theory as a predicative version of the type of all types). The alternative choice of taking away (3) is discussed in Coquand 1986.
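Martin-Löf’s predicative repair survives in modern proof assistants as a cumulative hierarchy of universes. A small Lean 4 sketch of the idea (illustrative only):

```lean
-- Instead of Type : Type (which yields Girard's paradox), each universe
-- lives one level up in a predicative hierarchy.
#check (Type 0 : Type 1)
#check (Type 1 : Type 2)

universe u
#check (Type u : Type (u + 1))
-- The impredicative `Type : Type`, i.e. `Type 0 : Type 0`, is rejected
-- by the kernel.
```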

Our hope for a universal foundation of mathematics built on inherently synthetic constructions became conceivable only recently, through the work of Martin-Löf (intuitionistic type theory, in its intensional characterization), Voevodsky (the Univalence Axiom), and Steve Awodey (the interpretation of identity types as topological path space objects), which together provide the power for the constructions of Homotopy Type Theory (HoTT).

Response by Thorsten Altenkirch:

“I don’t think the analytic vs synthetic distinction is so important here. In the end logical principles are a reflection of our natural (evolved) ability to reason. And obviously in this sense Mathematics is constructed by us. The question of extensionality in my view is something different: because mathematical objects are constructions of ours they are given by all that we can say about them. This is Leibniz’ principle. So by a function I mean a black box where I can put something in and I get something out. I don’t know anything about the mechanism (it may involve magic). Hence any two functions which produce the same output for all inputs are the same; they have the same properties, hence they are the same mathematical object. The same with types: all that I can observe about two types, let’s say the binary and the unary natural numbers, is limited by their interface. Since I cannot distinguish binary and unary numbers they are the same; if I want to distinguish them I need to introduce a property that distinguishes them (maybe I can observe the size of their representation, not very natural). This is the source of the univalence principle.”
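Altenkirch’s black-box view of functions is exactly function extensionality. A minimal Lean 4 sketch (an illustration of the principle, not part of his response): `fun n => 0 + n` and `fun n => n` differ as programs, yet `funext` identifies them because they agree on every input.

```lean
-- Leibniz-style extensionality: two black boxes with the same
-- input/output behaviour are the same mathematical object.
example : (fun n : Nat => 0 + n) = (fun n : Nat => n) :=
  funext fun n => Nat.zero_add n
```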
