Saturday, 29 June 2024

Turing machines vs computation

« Dependent type theories that are total and have a computable normalization procedure are Turing-incomplete. This is a theorem, not an opinion. Now, what the implications are for practical usability of dependent type theories that are total, that is an entirely different matter. » [1] (slightly adapted: see the entire discussion for background)

Conversely, I would argue that Turing machines are not an adequate model of computation: there just is no such thing as a machine that does not halt or crash or get switched off, eventually. And "eventually" rather goes with "interactively".

In fact, more generally, the name of that "misunderstanding" is, yet again, the problem of infinity in standard mathematics and mathematical logic.
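
For concreteness, here is a minimal Coq sketch (my own, not taken from the linked discussion) of what totality means in practice: general recursion is not directly available, and a standard workaround is to bound recursion by an explicit "fuel" argument, so that possible non-termination shows up as running out of fuel rather than as a program that never answers.

```coq
(* Sketch only: the Collatz iteration, which is not known to terminate in
   general, written as a total function by structural recursion on a fuel
   bound. *)
Require Import Arith.

Fixpoint collatz_fuel (fuel n : nat) : option nat :=
  match fuel with
  | O => None                          (* out of fuel: no verdict *)
  | S fuel' =>
      if Nat.leb n 1 then Some n       (* reached 0 or 1: done *)
      else if Nat.even n
           then collatz_fuel fuel' (Nat.div2 n)
           else collatz_fuel fuel' (3 * n + 1)
  end.

(* With enough fuel, the bounded function does reach 1 on small inputs. *)
Example collatz_6 : collatz_fuel 20 6 = Some 1.
Proof. reflexivity. Qed.
```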

[1] Proof Assistants Stack Exchange, "Possible root cause(s) of the misunderstanding that DTT implies not Turing complete?"
https://proofassistants.stackexchange.com/q/4002/3006#comment7782_4002

Monday, 20 May 2024

A Solution to Epimenides' Paradox

« 'Morning everybody,
"Epimenides lies iff everybody lies!" which
I am taking to be the predicative version of
the uncooperative rational player. »

Presentation and formal development (in Coq):
https://gist.github.com/jp-diegidio/f9a18bb0a8c87d585b1d56458cd082f6
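
As a minimal sketch in Coq (the identifiers and the statement are my own reading of the quote, not necessarily those of the gist), the quoted biconditional, unlike the Liar, is not paradoxical: it admits a model, for instance the one in which everybody does lie.

```coq
(* Sketch only; see the linked gist for the actual development. *)
Section Epimenides.
  Variable Person : Type.
  Variable epimenides : Person.
  Variable lies : Person -> Prop.

  (* "Epimenides lies iff everybody lies": satisfiable, hence consistent,
     in particular whenever everybody does lie. *)
  Lemma consistent_reading :
    (forall p : Person, lies p) ->
    (lies epimenides <-> (forall p : Person, lies p)).
  Proof.
    intros H; split; intros _; [ exact H | apply H ].
  Qed.
End Epimenides.
```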

[1] Wikipedia, "Epimenides paradox"
https://en.wikipedia.org/wiki/Epimenides_paradox
[2] Wikipedia, "Non-cooperative game theory"
https://en.wikipedia.org/wiki/Non-cooperative_game_theory

Sunday, 21 January 2024

On the logic of "it" - Sparse (hopefully-)not-so-metaphysical thoughts

The idea of a Characteristica Universalis, i.e. an encyclopedic compendium of acquired knowledge in terms of a universal formal language potentially able to express all understanding, becomes in our times the inverse idea that language encapsulates all there is to understanding (whence, in particular, the idea/hope that so-called Large Language Models are "a solution to problems").

But the simplest formalization exercise indicates that between language and understanding (or, language and "reality") there is an essential gap that only imagination can fill.

What does this expression mean: `fun x y => x + y` ?
What does this expression mean: `a^${h} <>~ T` ?
What does this expression mean: `2` ?

Sure, we might explain each of those, but, as long as the explanation is itself formal, we'd still need to find the bottom of it (what does "it" mean?).  Eventually, the problem of meaning remains at the level of our formal primitives.
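
To make that concrete with the first of those expressions, here is a small Coq illustration (mine, not part of the original exercise): the same surface expression `fun x y => x + y` denotes different objects depending on which interpretation of `+` and of the numerals is in scope, i.e. its meaning is fixed only relative to the primitives and conventions already adopted.

```coq
(* Sketch only: the "same" expression under two interpretations of + . *)
Require Import ZArith.

(* Natural-number reading: + is addition on nat. *)
Definition add_nat := fun x y : nat => (x + y)%nat.

(* Integer reading: + is addition on Z. *)
Definition add_Z := fun x y : Z => (x + y)%Z.

(* Same symbols, different functions: the %nat / %Z annotations and the
   type ascriptions are exactly the extra context that fixes the meaning. *)
```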

And that is properly our symbolic level, the one that simply "cannot and will not be explained": that is, not any more than its own shape is its own diagram (or than "(its) nature is its own explanation").  Ultimately, the idea/hope is that we cannot explain it, but we can understand it to the point of naming it.

Incidentally, one guiding principle is "let code speak for itself", which entails that our formal language should be as readable as natural language, just without the ambiguities: the formal aspect precisely ensures that code cannot lie about itself (about its assumptions as well as its derivations).

That much for a concrete convergence of formal and symbolic logic. Indeed, I'd propose that Leibniz's formal-encyclopedic project is by now feasible, with applications potentially up to the legal context.

However, and precisely for the sake of concreteness, I think we cannot get away with it except under a (trustless, surely) contract, where "it" takes the form of a (collaborative, hopefully) game-theoretic protocol for "the acquisition, revision, consolidation and exercise of mutually informed and formally verified knowledge".