!!From the Bulletin of EATCS No. 98
!A Dialogue about Computations and Natural Sciences with Professor Giuseppe Longo, [http://www.di.ens.fr/users/longo] \\
by __Cris Calude__\\
\\
''Giuseppe Longo, a well-known mathematician and computer scientist, is a professor at the École Normale Supérieure in Paris and Directeur de Recherche at CNRS. Professor Longo has extensively published in many areas including logic and theory of computation, type theory, category theory and their applications to computer science, interfaces between mathematics, physics, biology, philosophy of mathematics and cognitive sciences.
\\ \\
He is the editor of the book series "Visions des sciences", Hermann, Paris, and is serving as an editor of the following academic journals: "Mathematical Structures in Computer Science" (editor-in-chief), "Information and Computation", "Theoretical Informatics and Applications", "J.UCS", "La Nuova Critica", "The European Review", "Journal of Mind Theory". Professor Longo has supervised 33 (research-oriented) Master Theses and 15 PhD Theses. He has been an invited lecturer at 30 international conferences and has given more than 150 seminar talks at universities and research institutions in Europe, the USA and Asia. Professor Longo has been a member of the Academia Europaea since 1992.''
\\ \\
__Cristian Calude__: Your academic career took you from Pisa (where you earned your "Laurea" (cum laude) in Mathematics and then spent 15 years as an academic) to the US (UC Berkeley, MIT and Carnegie Mellon for three years), and then to ENS in Paris (since 1990). How enriching have these moves been?
\\ \\
__Giuseppe Longo__: Learning from others, the exchange with others, is crucial in scientific work. Very few researchers can do relevant work without interaction. I learned a lot from these very enriching contexts, beginning with the extraordinary milieu of mathematics and informatics in Pisa, in the ’70s and ’80s, and the subsequent American experience. Some collaborations, in particular at MIT and Carnegie Mellon with many colleagues, but also in Britain (R. Hindley) and Holland (H. Barendregt), were fundamental for me. And then the complex network of interactions with colleagues of three disciplines I am enjoying in Paris, in particular with the physicists F. Bailly and T. Paul. The main lesson I try to give to my students is that "two interacting brains think and produce much more than the double". But collaborating is very hard: good researchers are very careful in choosing collaborators and the exchange itself is difficult.
\\ \\
There is a growing fashion, instead, in the use of words referring to "competition" in research. But this is not how science goes: the difficult and productive side is collaborating, exchanging, learning from others and ... going further on, together. If, from time to time, one has to compete for finite resources or for a position, this is part of the game, not the purpose nor the joy of science.
\\ \\
Stressing competition between teams and individuals is a real disaster for scientific research. And it is largely borrowed from the current cultural hegemony of the financial markets, where traders are in continual competition, and they compete on a mostly empty economic/productive content. As a further imitation, many institutions entrust "Independent Evaluation Agencies" (like those that gave the highest score to Enron in 2001 and to Lehman Brothers in 2008, until the "day before" ...) with judging scientific work. And self-appointed, "science independent" agents provide automatic indexes for classifying researchers. I proposed an "Editors' Note: Bibliometrics and the Curators of Orthodoxy" (downloadable from my web page) for ''Mathematical Structures in Computer Science'', the CUP journal I direct, on this theme. It was approved by all the 34 members of the board, from 11 different countries. Yes, in contrast to competition, collaboration and exchange are fundamental, and moving enhances them greatly.
\\ \\
__CC__: Moving is a joy and a pain. How hard was it for your family, especially your daughter?
\\ \\
__GL__: Hard, but stimulating. My wife started a new career, first with a Master's in Pittsburgh, when I was teaching at CMU, then a PhD in Paris, finally a university position in France, but this was tough on her. My daughter moved, between the ages of 5 and 8, between three very different school systems (USA, Italy, France), not easy for a child. Now she is trilingual, though, and she can ... "adjust" to almost any life context. And she is doing beautiful thesis work on the Italian Quattrocento, in Paris, with very frequent trips to Italy.
\\ \\
__CC__: We share two main interests: incompleteness and randomness. Let’s talk about incompleteness first.
\\ \\
__GL__: OK, but I prefer to tackle the issue by relating it to some Mathematics of Physics, in a preliminary way. In a short note of 2001, I suggested that Poincaré's three-body theorem is an epistemological predecessor of Gödel’s undecidability result, in particular because Hilbert’s completeness conjecture is a meta-mathematical revival of Laplace's idea of the predictability of formally (equationally) determined systems. For Laplace, once the equations are given, you can completely derive the future states of affairs (with some, preserved, approximation). Or, more precisely, in "Le système du monde", he claims that the mathematical mechanics of moving particles, one by one, two by two, three by three ... compositionally and ''completely'' "covers" or makes understandable the entire Universe. And, as for celestial bodies, by this progressive mathematical integration, "We should be able to deduce all facts of astronomy", says he.
\\ \\
The challenge, for a closer comparison, is that Hilbert was speaking about ''purely mathematical'' "''yes or no''" questions, while unpredictability shows up in the relation between a physical system and a mathematical set of equations (or evolution function). That is, in order to give unpredictability, Poincaré's Negative Result, as he called his proof of the non-analyticity of the equations for the three-body system, needs a reference to physical measure. Measure is always, in classical (and relativistic) physics, an interval, that is, an approximation. And non-observable initial fluctuations may give ''observable'', thus unpredictable, evolutions, typically in the presence of non-linearity of the mathematical modelling (main reasons: the initial interval expands exponentially - this is measured by Lyapunov exponents - and it is "mixed", its order is not preserved).
\\ \\
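As a rough illustration of this sensitivity (a minimal Python sketch, not part of the interview; the logistic map stands in for any non-linear dynamics), two initial points whose difference lies far below any measurement interval diverge at a rate governed by the Lyapunov exponent:
{{{
# Minimal sketch (not from the interview): two initial conditions differing far below
# any realistic measurement interval diverge exponentially under the chaotic logistic
# map x -> 4x(1-x); the divergence rate is the Lyapunov exponent (ln 2 for this map).
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12      # difference far below any plausible measurement interval
for n in range(60):
    x, y = logistic(x), logistic(y)
    if n % 10 == 9:
        print(f"step {n+1:2d}: |x - y| = {abs(x - y):.3e}")

# Crude numerical estimate of the Lyapunov exponent as the average of log|f'(x)|.
x, acc, N = 0.3, 0.0, 100_000
for _ in range(N):
    acc += math.log(abs(4.0 - 8.0 * x))
    x = logistic(x)
print("estimated Lyapunov exponent:", acc / N, "(theoretical value: ln 2 ~ 0.693)")
}}}
\\ \\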
In order to relate unpredictability to undecidability consistently, one needs to effectivize the dynamical spaces and measure theory (along the lines of Lebesgue's measure), the loci of dynamic randomness. This allows one to have a sound and purely mathematical treatment of the epistemological issue (and to obtain a convincing correspondence between unpredictability and undecidability). I will go back to the work on this while answering your next question, on randomness.
\\ \\
As for Gödel's incompleteness, when studying Poincaré’s theorem, I understood that the two results also share a methodological point: they both destroy the conjecture of predictability/completeness from inside. Poincaré does not need to refer concretely to a physical process that would not be predictable, by measuring it "before and after". He shows, from the pure analysis of the equations, that the resulting bifurcations and homoclinic intersections (between stable and unstable manifolds) lead to deterministic unpredictability (of course, the equations are derived in reference to three bodies in their gravitational fields, just as the Peano Axioms were invented in reference to the ordered structure of numbers).
Gödel as well, by playing the purely formal game, formally constructs an undecidable sentence, with no reference whatsoever, in the ''statements'' and ''proofs'' of his 1931 paper, to "semantics", "truth" or the like, that is, to the underlying mathematical structure.
\\ \\
Modern "concrete incompleteness" theorems (that is, Godel-Girard’s normalisation, Paris-Harrington or Friedman-Kruskal theorems) resemble instead Laskar’s results of the ’90s, where "concrete unpredictability" is shown for the solar system. In reference to the best possible astronomical measures, Laskar shows that the evolution of our beloved system is provably unpredictable, globally, beyond one million years (one hundred years, when considering only Earth). Similarly, concrete incompleteness was given by proving (unprovability and) truth over the (standard) model.
\\ \\
More generally, I view the incompleteness of our formal (and equational) approaches to knowledge as a fundamental epistemological issue. And this is why we permanently need new science: by inventing new principles of conceptual construction we change directions, propose new intelligibilities, grasp or organise new fragments of the World. There is no such thing as "the final solution to the foundational problem" in mathematics (as Hilbert dreamed, a true nightmare), nor in other sciences.
\\ \\
And finally, then, my other current interest, biology. The "incompleteness" of molecular theories for understanding life phenomena is a similar issue. There is no way to understand/derive completely embryogenesis or phylogenesis (evolution) by looking only at the four letters of the bases of DNA (the formal language of molecular biology). More precisely, in this very different context, "completeness" philosophically corresponds to the largely financed myth that the ''stability'' and the ''organisation'' of the DNA and the subsequent molecular cascades completely determine the ''stability'' and the ''organisation'' of the cell and the organism. This is false, since the ''stability'' and the ''organisation'' of the cell and the organism causally contribute to the ''stability'' and the ''organisation'' of the DNA and the subsequent molecular cascades. Thus the analysis of the global structure of the cell (and the organism) must parallel the absolutely crucial molecular analyses. The hard philosophical point to explain now, to my friends in molecular biology, is that "incomplete" does not mean useless (well, I worked most of my life in Type Theory, lambda-calculus and related formal systems ...), but that we also badly need an autonomous theory of the organism and to further develop the (fantastic) Darwinian theory of evolution.
\\ \\
By the way, randomness plays a crucial role in evolution but also, it is increasingly believed, in embryogenesis. But ... what kind of randomness? Physics, classical/quantum, proposes two distinct notions of randomness ...
\\ \\
__CC__: What aspects of randomness interest you?
\\ \\
__GL__: Classical (physical) randomness is unpredictability of deterministic systems in finite time (dice trajectories are perfectly determined: they follow the Hamiltonian, a unique geodesic; yet, they are very sensitive to initial and boundary conditions ...: it is, in general, not worth writing the equations). Now, Martin-Löf’s (and Chaitin’s) number-theoretic randomness is for infinite sequences. How may this yield a connection, then, between Poincaré’s unpredictability and Gödel’s undecidability?
\\ \\
As I said, physical randomness, as deterministic unpredictability, is a matter at the interface "equations/process" and shows up in finite time. Yet physical randomness may also be expressed as a limit or asymptotic notion and, by this, it may be soundly turned into a purely mathematical issue: this is Birkhoff’s ergodicity (for any observable, limit time averages coincide with space averages). That is, physical randomness, as a mathematical limit property, lives in formal systems of equations or evolution functions: in their measurable spaces, they may engender infinite random trajectories or generic points, in the ergodic sense. And this sense applies in (weakly chaotic) dynamical systems, within the frame of Poincaré’s geometry of dynamical systems.
\\ \\
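For reference (a standard formulation, not a quotation from the interview), Birkhoff's ergodicity can be stated as follows: for an ergodic, measure-preserving dynamics, the time average of any integrable observable equals its space average for almost every initial point.
{{{
% Birkhoff's pointwise ergodic theorem (standard statement, added for reference).
% (X, B, mu) a probability space, T : X -> X measure-preserving and ergodic, f in L^1(mu):
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\big(T^k x\big) \;=\; \int_X f \, d\mu
\qquad \text{for } \mu\text{-almost every } x \in X .
}}}
\\ \\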
As for algorithmic randomness, Martin-Löf randomness is a "Gödelian" notion of randomness, as it is based on recursion theory and yields a strong form of undecidability for infinite 0-1 sequences (in short, a sequence is random if it passes all ''effective statistical tests''). Recently, M. Hoyrup and C. Rojas, under Galatolo’s and my supervision, proved that dynamic randomness (à la Poincaré, thus, but at the purely mathematical limit, in the ergodic sense), in suitable ''effectively given'' measurable dynamical systems, is equivalent to (a generalisation of) Martin-Löf randomness (Schnorr’s randomness). This is a non-obvious result, also based on a collaboration with P. Gács, spread along two "entangled" doctoral dissertations (defended in June 2008, a nice example of how two collaborating individuals may produce more than the "double").
\\ \\
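For the reader's orientation (standard definitions, not taken from the interview): a Martin-Löf test is an effectively presented sequence of open sets of effectively shrinking measure, a sequence is random when it escapes every such test, and Schnorr's variant further requires the measures of the test sets to be computable.
{{{
% Martin-Löf randomness for infinite 0-1 sequences (standard definition, for reference).
% A Martin-Löf test is a uniformly c.e. sequence (U_n) of open subsets of 2^omega with
\mu(U_n) \le 2^{-n} \quad \text{for all } n .
% A sequence \alpha is Martin-Löf random iff it passes every such test:
\alpha \notin \bigcap_{n} U_n \quad \text{for every Martin-Löf test } (U_n).
% Schnorr randomness asks, in addition, that each \mu(U_n) be a computable real.
}}}
\\ \\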
In the last three years, I have been teaching a course at ENS, in Paris (and once in Rome III), along these parallel lines, from Poincaré and Gödel to algorithmic randomness. The course (the program is on my web page) was organised with a colleague in quantum mechanics at ENS, Thierry Paul: we alternated, one two-hour lecture each, and he introduced the EPR paradox (Einstein’s and others’ paper on entanglement) and its modern consequences, quantum computing. As Thierry moved to Polytechnique, I took up part of his lectures, since we are doing some joint work on a logical and (modern) physical understanding of EPR. By the way, EPR is dedicated to proving the incompleteness (E) of QM. Their argument is (beautiful, but) wrong, as it is based on the impossibility of entanglement.
\\ \\
As for quantum randomness, note now that, because of entanglement, it differs from classical randomness: if two classical dice interact and then separate, the probabilistic analyses of their values are independent. When two quanta interact and form a "system", they can no longer be separated: measurements on them give correlated probabilities for the results (mathematically, they violate Bell’s inequalities).
\\ \\
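The violation of Bell's inequalities can be made concrete in the CHSH form (standard textbook material, added only for orientation): locally separable, classical correlations are bounded by 2, while measurements on entangled quantum states reach 2√2.
{{{
% CHSH form of Bell's inequality (standard statement, added for orientation).
% E(a,b) denotes the correlation of outcomes for measurement settings a and b.
\text{Local (classical) correlations:}\quad
\big| E(a,b) + E(a,b') + E(a',b) - E(a',b') \big| \;\le\; 2 .
% Entangled quantum states violate this bound, up to Tsirelson's bound:
\text{Quantum mechanics:}\quad
\big| E(a,b) + E(a,b') + E(a',b) - E(a',b') \big| \;\le\; 2\sqrt{2} .
}}}
\\ \\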
__CC__: What is the link between computability, continuity and the Church-Turing thesis?
\\ \\
__GL__: The idea hinted at in the book and in several papers with Bailly and Paul, two physicists, is that the mathematical structures constructed for the intelligibility of physical phenomena, according to their continuous (mostly in physics) or discrete (generally in computing) nature, may propose different understandings of Nature.
\\ \\
In particular, the "causal relations”, as structures of intelligibility (we "understand Nature" by them), are mathematically related to the use of the continuum or the discrete and may deeply differ (in modern terms: they induce different symmetries and symmetry-breakings).
\\ \\
But what discrete (mathematical) structures are we talking about? I believe that there is one clear mathematical definition of "discrete": a structure is ''discrete'' when the discrete topology on it is "natural". Of course, this is not a formal definition, but in mathematics we all know what "natural" means. For example, one can endow Cantor’s real line with the discrete topology, but this is not "natural" (you do not do much with it); on the other hand, the integer numbers or a digital data base are naturally endowed with the discrete topology (even though one may have good reasons to work with them also under a different structuring).
\\ \\
Church’s thesis, introduced in the ’30s after the functional equivalence proofs of various formal systems for computability, concerns only computability over integers or discrete data types. As such, it is an extremely robust thesis: it ensures that any sufficiently expressive ''finitistic formal system'' over integers (a Hilbertian-type logic-formal system) computes exactly the recursive functions, as defined by Gödel, Kleene, Church, Turing ... This thesis therefore emerged within the context of mathematical logic, as grounded on formal systems for arithmetic computations, on discrete data types.
\\ \\
The very first question to ask is the following: if we broaden the formal framework, what happens? If we want to refer to continuous (differentiable) mathematical structures, the extension to consider is to the computable real numbers. Are, then, the various formalisms for computability over real numbers equivalent, when they are maximal? An affirmative answer could suggest an extension of Church's thesis to computability on "continua". Of course, the computable reals are countably many, but they are dense in the "natural" topology over Cantor’s reals, a crucial difference, as we shall see.
\\ \\
With this question, we begin to get near to physics, since it is within spatial and often also temporal continuity that we represent dynamical systems, that is, most mathematical models for classical physics. This does not imply that the World is continuous, but only that we have said many things thanks to continuous tools as very well specified by Cantor (but his continuum is not the only possible one: Lawvere and Bell, say, proposed another without points).
\\ \\
Now, of this equivalence of formalisms, at the heart of Church’s thesis, there remains very little when passing to computability over real numbers: the theories proposed are demonstrably different, in terms of computational expressiveness (the classes of defined functions). The various systems (recursive analysis, whose ideas were first developed by Lacombe and Grzegorczyk, in 1955-57; the Blum, Shub and Smale, BSS, system; the Moore-type recursive real functions; different forms of "analog" systems, among which threshold neurones, the GPAC ...) yield different classes of "continuous" computable functions. Some recent work has established links, reductions between the various systems (more precisely: pairwise relations between subsystems and/or extensions); yet the full equivalence, as in the discrete case, is lost. Moreover, and this is crucial, these systems have no "universal function" in Turing’s sense. And this for a fundamental reason, which has to be analysed closely.
\\ \\
If one endows non-trivial space continua with the interval topology (the "real" topology), there is no way to have an isomorphism between spaces of different dimension (see below). This isomorphism, instead, is needed for having the universal map and, in general, for computability on the discrete. Its work spaces may be of any finite dimension: they are all effectively isomorphic, "Cartesian dimension" does not matter!
\\ \\
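The fact that "Cartesian dimension does not matter" on discrete data can be made explicit with the classical Cantor pairing function, an effective bijection between N×N and N (a minimal sketch, not from the interview); it is precisely this kind of dimension-collapsing isomorphism that has no analogue on continua with the interval topology.
{{{
# Minimal sketch (not from the interview): the Cantor pairing function, an effective
# bijection N x N -> N. Iterating it encodes tuples of any finite dimension on a line,
# which is why "Cartesian dimension" does not matter for discrete computability.
import math

def pair(x: int, y: int) -> int:
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    y = z - w * (w + 1) // 2
    return w - y, y

assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
print(pair(3, 4), unpair(32))              # -> 32 (3, 4)
}}}
\\ \\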
This is highly unsuitable for physics. First, dimensional analysis is a fundamental tool (one cannot confuse energy with force, nor with the square of energy ...). Second, dimension is a topological invariant, in all space manifolds of classical and relativistic physics. This is shown by the fact that if two such spaces have isomorphic open subsets, then they have the same dimension. This is one of the most beautiful correspondences between mathematics and physics. Take physical measure, which is always an interval (it is approximated, by principle, classically), as a "natural" starting point for the metric (thus the interval topology): then you ''prove'' that this crucial notion for physics, dimension, is a topological invariant. Discrete computability destroys this: a cloud of isolated points has no dimension, per se, and you may, for all theoretical purposes, encode them on a line. When you get dimension back, in computability over continua, where the trace of the interval topology maintains good physical properties, you lose the universal function and the equivalence of systems. Between the theoretical world of discrete computability and physico-mathematical continua there is a huge gap.
\\ \\
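The topological invariance of dimension invoked here is Brouwer's "invariance of domain" (a standard statement, added for reference):
{{{
% Brouwer's invariance of domain (standard statement, added for reference).
% If U \subseteq R^n and V \subseteq R^m are non-empty open sets, then
U \cong V \ \text{(homeomorphic)} \quad\Longrightarrow\quad n = m .
% Hence dimension is a topological invariant of the continua of classical and
% relativistic physics, while a set of isolated points carries no such invariant
% and can be effectively encoded on a line.
}}}
\\ \\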
One cannot even extend to the latter a sound form of Church's Thesis. While I believe that one should do better than Cantor as for continua, I would not give a penny for a physical theory where dynamics takes place only on discrete spaces, departing from physical measure, dimensional analysis and the general relevance of dimensions in physics (again, from heat propagation to mean field theory, to relativity theory ... space dimension is crucial).
\\ \\
As for the relevance of the discrete, quantum mechanics started exactly with the discovery of a key (and unexpected) discretisation of the light absorption or emission spectra of atoms. Then, a few dared to propose a discrete lower bound to the measure of ''action'', that is, of the product energy x time. It is this physical dimension that bears a discrete structure. Clearly, one can then compute, by assuming the relativistic maximum for the speed of light, a Planck length and time. But in no way are space and time thus organised in small "quantum boxes". And this is the most striking and crucial feature of quantum mechanics: the "systemic" or entanglement effects, which yield inseparability of observables. No discrete space topology is natural. That is, these quantum effects are the opposite of a discrete, separated organisation of space, while being at the core of its scientific originality.
\\ \\
In particular, they motivate quantum computing (as well as our analysis of quantum randomness above). As a matter of fact, Thierry Paul and I claim that the belief in an absolutely separable topology of space continua is Einstein’s mistake in EPR.
\\ \\
A final remark. In general, the discrete is not an approximation of classical continua. In even weakly chaotic systems, a difference by approximation (below measure, typically) quickly leads to major (observable) differences in evolutions. And the approximation relation is at most reversed. In some, not all, dynamical systems, the "shadowing lemma" holds: for any discrete trajectory, there is a continuous one approximating it. The quantification is the other way round. This is an important result in Numerical Analysis, as it guarantees that a discrete trajectory on the screen is not meaningless: one can find a continuous one approximating it. Of course, it does not start with the same initial point, in general.
\\ \\
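A standard formulation of the shadowing lemma (added for reference; it holds for hyperbolic, not for all, systems) reads as follows, with the quantifiers "the other way round", as noted above:
{{{
% Shadowing lemma (standard formulation for hyperbolic systems, added for reference).
% f : M -> M on a hyperbolic set. For every delta > 0 there is an epsilon > 0 such that
% every epsilon-pseudo-orbit (x_n), i.e. a sequence with
d\big(f(x_n), x_{n+1}\big) < \varepsilon \quad \text{for all } n
% (for instance, a numerically computed trajectory with round-off errors),
% is delta-shadowed by a true orbit: there exists y such that
d\big(f^{\,n}(y), x_n\big) < \delta \quad \text{for all } n .
}}}
\\ \\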
In summary, continua, Cantorian or not, take care rather well (they are not an absolute) of the approximated nature of physical measure, which is represented as an interval: the unknowable fluctuation is within the interval. Classically, I insist, the relevance of measure is derived from Poincaré’s results (changes below measure induce major differences over time). And physical measure is our only form of access to "reality". The arithmetizing foundation of mathematics went along another (and very fruitful) direction, based on perfectly accessible data types. Poincaré firmly opposed the underlying philosophy of knowledge, by deep, but informal, reflections on "action" in the physical world.
\\ \\
__CC__: We have entropy and negative entropy. You invented anti-entropy.
\\ \\
__GL__: Traditionally, information is considered as negentropy (Brillouin). Then, by definition:
#the sum of a quantity of information (negentropy) and an equal quantity of entropy gives 0;
#information (Shannon, but also Kolmogorov) is "insensitive to coding" (one can "encrypt" and "decrypt" as much as one wishes but the information content will not be lost/gained, in principle); the standard formulas below make both points explicit.
\\
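{{{
% Standard formulas behind points 1 and 2 (added for reference, not from the interview).
% 1. Brillouin: information is negentropy, so an equal amount of entropy cancels it:
I = -\,S \qquad\Longrightarrow\qquad I + S = 0 .
% 2. Shannon entropy of a source with probabilities p_i:
H(p) = -\sum_i p_i \log_2 p_i ,
% which is unchanged under any bijective recoding \sigma of the symbols
% ("insensitive to coding"):
H\big(p \circ \sigma^{-1}\big) = H(p) .
}}}
\\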
I believe, however, that this notion, whose applications are numerous, is not sufficient for an analysis of the living state of matter. DNA (usually considered as digital information) is the most important component of the cell, as I said, but it is necessary to analyse the ''organisation'' of the organism, as an observable specific to biological theorisation.
\\ \\
Here, the collaboration with Francis Bailly, a physicist also interested in biology, has been very important. He actually was my teacher in many aspects of the natural sciences (Francis recently passed away: a recorded conference in his memory may be accessed from my web page). Concerning biological (morphological) complexity, we have proposed the notion of anti-entropy to define it (or quantify it in terms of complexity of cellular, functional and phenotypical differentiation). In short, biological complexity may be understood as "information specific to the form", including the intertwining and enwrapping of levels of organisation. Its use in metabolic balance equations has produced a certain number of results mentioned in a recent long article. We have, in particular, examined systems far from equilibrium and analysed diffusion equations of biomass over biological complexity as anti-entropy, following Schrödinger's "operational method" in quantum mechanics. This has enabled us to carry out a mathematical reconstruction of this diffusion, which corresponds to the paleontological data presented by Gould for the evolution of species.
\\ \\
Anti-entropy is compatible with information as negentropy, but it must be considered as a ''strict'' extension, in a logical sense, of the thermodynamics of entropy. Typically, the production of entropy and that of anti-entropy are summed in an "extended critical singularity", an organism, and the sum is never zero, in contrast to Brillouin’s and others’ negentropy. As it is linked to spatial forms, anti-entropy is "''sensitive to coding''", contrary to digital information (it depends on the dimensions of embedding manifolds, on folds, on singularities ...).
\\ \\
In short, over the last six years, in several collaborations and by supervising four theses, we have compared physical (dynamic) randomness with algorithmic randomness (at the center of ''algorithmic theories of information''); we have worked at a theory of "extended criticality" (living objects persist in an "extended critical state"); we have added anti-entropy (a "''geometrical'' extension" of the notion of information) to thermodynamic (in)equalities and balance equations; we have begun modelling biological rhythms and time in two-dimensional manifolds, a sort of non-trivial geometrization of time (and, perhaps, a quite useful one, for the digital simulation of cardiac rhythms I am developing with a PhD student, M. Montevil). The scientific finality of this work may also entail some epistemological consequences, I hope: it should participate in the epistemological debate regarding the notion of information, the updating of its theoretical principles, as part of the many existing interactions with physics and biology. A possible outcome of these interactions could be to start thinking about ... the next machine. Aren’t we a little tired of this nice, but rather old, "Discrete Data Types Machine"?
\\ \\
__CC__: Is it possible to summarise the ideas of your book ''Mathématiques et sciences de la nature. La singularité physique du vivant'' (Hermann, Paris, 2006) with F. Bailly? Are you planning an English version?
\\ \\
__GL__: Yes, there is an ongoing translation into English. In the book, Francis and I attempt to identify the organising concepts of some physical and biological phenomena, by means of an analysis of the foundations of mathematics and of physics, with the aim of unifying phenomena, of bringing different conceptual universes into dialogue. The analysis of the role of "order" and of symmetries in the foundations of mathematics is linked to the main invariants and principles, among which the geodesic principle (a consequence of symmetries), which govern and confer unity on the various physical theories. Moreover, we attempt to understand causal structures, a central element of physical intelligibility, in terms of symmetries and their breakings. The importance of the mathematical tool is also highlighted, enabling us to grasp the differences in the models for physics and biology which are proposed by continuous and discrete mathematics, such as computational simulations.
\\ \\
As for biology, being particularly difficult and not as thoroughly examined at a theoretical level, we propose a "unification by concepts", an attempt which should always precede mathematisation and which we later tried in some papers. This constitutes an outline for unification also based upon the highlighting of conceptual differences, of complex points of passage, of technical irreducibilities of one field to another. Indeed, a monist point of view such as ours should not make us blind: we, the living objects, are surely just big bags of molecules or, at least, this is our main metaphysical assumption. The point, though, is: which theory can help us to better understand these bags of molecules, as they are, indeed, rather funny (singular?), from the physical point of view. Technically, this singularity is expressed by the notion of "extended criticality", a notion that logically extends the pointwise critical transitions of physics.
\\ \\
__CC__: In what sense do you think physical or biological processes "compute"?
\\ \\
__GL__: The Discrete State Machines that compute are a remarkable invention, based on a long history. As I hint in the paper "Critique of Computational Reason in the Natural Sciences", this story begins with the invention of the alphabet, probably the oldest experience of discretisation. The continuous song of speech, instead of being captured by the design of concepts and ideas (by recalling "meaning", as in ideograms), is discretised by annotating phonetic pitches, an amazing idea (the people of Altham, in Mesopotamia, 3300 B.C.). Meaning is reconstructed by the sound, which acts as a compiler, either aloud or in silence (but only after the IV century A.D. did we learn to read "within the head"!).
\\ \\
I insist that the crucial feature of alphanumeric discretisation is the invention of a discrete coding structure, which is far from obvious. Think also of the originality of Gödel-numbering, an obvious practice now, but another remarkable invention. Turing’s work followed: the Logical Computing Machine (LCM), as he first called it, at the core of our science (right/left, 0, 1, ...). Of course, between the alphabet and Turing, you also have Descartes' "discretisation" of thought (stepwise reasoning, along a discrete chain of intuitive certitudes ...) and much more.
\\ \\
When, after 1948 or so, Turing became interested in physics again, he changed the name of his LCM: in the 1950 and 1952 papers, he calls it the Discrete State Machine (this is what matters for its physical behaviour). And twice in his 1950 paper (the "imitation game") he calls it "Laplacian". Its evolution is theoretically predictable, even if there may be practical unpredictability (programs too long to be grasped, says he).
\\ \\
So, we invented an incredibly stable processor, which, by working on discrete data types, does what it is expected to do. And it iterates, very faithfully. Primitive recursion and portability of software are forms of iterability: iterate and update a register, do what you are supposed to do, respectively, even in slightly different contexts, over and over again. For example, program the evolution function of the most chaotic strange attractor you know. Push "restart": the digital evolution, by starting on the same initial digits, will follow exactly the same trajectory (in a paper on Turing’s imitation game I discuss the simulation of the double pendulum, a chaotic device). This makes no physical sense, but it is very useful (also in meteorology: you may restart your turbulence, exactly, and try to better understand how it evolves ...). Of course, you may imitate unpredictability by some pseudo-random generator or by ... true physical randomness, added ad hoc. But this is cheating the observer, in the same way that Turing’s ''imitation'' of a woman’s brain is meant to cheat the observer, not to "''model''" the brain. He says this explicitly, all the while working, in his 1952 paper, at a ''model'' of morphogenesis, as (non-)linear dynamics. Observe, finally, that our colleagues in networks and concurrency are so good that programming in networks is reliable: programs do what they are supposed to do, they iterate and ... give you the web page you want, identically, one thousand times, one million times. And this is hard, as physical space-time, which we better understand by continua and continuous approximations, steps in, yet still on discrete data types, which allow perfect iteration.
\\ \\
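A minimal sketch of the "push restart" point (not from the interview, using the logistic map as a stand-in for any chaotic dynamics): a digital restart on the very same initial digits reproduces the trajectory exactly, whereas a restart that is identical only up to a measurement interval soon diverges.
{{{
# Minimal sketch (not from the interview): digital iterability versus physical restart.
def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))   # chaotic logistic map
    return xs

run1 = trajectory(0.123456)
run2 = trajectory(0.123456)            # digital restart: same digits, same program
run3 = trajectory(0.123456 + 1e-10)    # "physical" restart: fluctuation below measure

print("digital restart identical     :", run1 == run2)   # True, bit for bit
print("restart below measure diverges:",
      max(abs(a - b) for a, b in zip(run1, run3)) > 0.1)  # almost surely True
}}}
\\ \\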
Those who claim that the Universe is a big digital computer miss the originality of this machine of ours. It is like believing that, when we speak, we produce sequences of letters: this is a cartoon’s vision of language and misses the originality of our invention, the alphabet, an early musical notation (Chinese children have a different view: their cartoons’ bubbles evoke concepts). When we construct computers, we achieve the far from obvious miracle of producing a reliable, thus programmable, physical device, iterating as we wish and any time we wish, even in networks. One should not miss the principles that guided this invention, as well as the principles by which we understand physical dynamics.
\\ \\
By the way, are the main physical constants, ''G, c, h, computable'' (real numbers)? It depends on the choice of the reference system and the metrics, of course. So, fix ''h'' = 1. Then, you have to renormalise all metrics and re-calculate, by equations, dimensional analyses ''and'' physical measure, ''G'' and ''c''. But physical measure will always give an interval, as we said, or, in the quantum frame, the probability of a value. If one interprets the classical measure interval as a Cantorian continuum, the best way, so far, to grasp fluctuations, then ... where are ''G'' and ''c''? Non-computable reals form a set of Lebesgue measure 1 ...
\\ \\
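The last remark can be spelled out (a standard observation, added for reference): since every computable real is determined by a program, there are only countably many of them, so they form a Lebesgue null set and the non-computable reals carry full measure.
{{{
% Standard observation behind the last remark (added for reference).
% There are countably many programs, hence countably many computable reals,
% and every countable set is Lebesgue-null. On [0,1]:
\lambda\big(\{x \in [0,1] : x \text{ computable}\}\big) = 0
\quad\Longrightarrow\quad
\lambda\big(\{x \in [0,1] : x \text{ not computable}\}\big) = 1 .
}}}
\\ \\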
Yet, the most striking mistake of many "computationalists" is to say: but, then, some physical processes would super-compute (compute non-computable functions)! No, this is not the point. Most physical processes simply do not ''define'' a mathematical function. In order for a classical process to define a function, you have to fix a time for input, associate a (rational) number to the interval of measure and ... let the process go. Then you wait for the output time and measure again. In order for the process to define ''f(x)'' = ''y'', to a rational input ''x'' it must always associate a rational output ''y''. But if you restart, say, your physical double pendulum on ''x'', that is, within the interval of the measure which gave you ''x'', a minor (thermal, say) fluctuation, ''below that interval x'', will yield a different observable result ''y''' after a very short time. So, a good question would be, instead: consider a physical process that ''defines'' a (non-trivial) function; is this function computable?
\\ \\
The idea then would be that the process is sufficiently insensitive to initial conditions (some say: robust) as to actually define a function. But then one should be able to partition the World into little cubes of the smallest size, according to the best measure as for insensitivity (fluctuations below that measure do not affect the dynamics). If the Accessible World is considered finite (but ... is it?), then one can make a list out of the finite input-output relation established by the given process. This is a "program": is it compressible?
\\ \\
As for biology, what can I say? 60% of fecundations in mammals fail (do not reach a birth): a very bad performance for DNA as a program. While iterability is at the core of software (and hardware) design, our fantastic invention, the key principle for understanding life, at the phenotypic level, is ''variability'', a form of non-iterability. It is crucial for evolution, but also for ontogenesis, that a cell is never identical to the mother cell. So, the principles of intelligibility are the exact opposite: the failure of most fecundations corresponds to the possibility that a mutant better fits a changing environment (affecting the mother’s womb, say). Of course, some molecular processes iterate, but there is an increasing tendency to analyse molecular cascades in terms of statistical phenomena (and this is where good computational imitations may help to understand, by some use of pseudo-randomness or by network interactions). This opens the way to an increasing role for epigenetics and, thus, to the relevance of downward regulating effects, from the cell and the organism to DNA expression.
\\ \\
__CC__: You argue that incomputability phenomena are more important for physics than computable ones. After all, don't the laws of physics seem more computable than incomputable?
\\ \\
__GL__: Your second question refers to the effectiveness of our ''mathematical writing'' of physical invariants: of course, equations, evolution functions ... are given by sums, products, exponents, derivations, integrations ... all effective operations. Moreover, no one is so crazy as to put an incomputable real as a coefficient or exponent in an equation (even if ''h'' could be so ...). This gives us remarkable approximations and, most often, qualitative information: Poincaré's geometry of dynamical systems or Hadamard’s analysis of the geodesic flow on hyperbolic surfaces do not give predictions, but very relevant global information (by attractors, for example, or regularities in flows ... that we beautifully see today, as never before, by fantastic approximations, "shadowed" on our computer screens).
\\ \\
I do not know (absolute) laws of Nature, but only our constructive theorising on the phenomenal veil, at the interface between us and the World. These active constructions are of course effective (we use the alphabet, effective operations and codings, I insist). Yet predictable processes are not many in Nature: you can predict a few forthcoming eclipses, at a human time scale, but the Solar System is chaotic over astronomical times, as Poincaré proved and Laskar quantified (and computed!). The unpredictable ones are the mathematical and computational challenge. And a computable physical process is, by definition, deterministic and predictable. In order to predict (pre-dicere, "to say in advance" in Latin), just "say" or write the corresponding program and compute in advance; more precisely, the results discussed above, by showing the equivalence of unpredictability and (strong) undecidability, ML-randomness, prove this fact, by logical duality. Unpredictability may pop out in networks, and this because of physical space-time (we then make them computable and predictable/reliable by forcing semaphores, handling interleaving ...). In Nature, many (most, fortunately) processes escape predictions, thus our computations. Fortunately, otherwise there would be no change, nor life in particular: randomness is crucial. And when we compute unpredictable evolutions, we just approximate their initial part, as I said, or give qualitative information, both very relevant tasks. But engineers put in some more cement than computed, to take care of vibrations below measure ...
\\ \\
Now, the only mathematical way I know to define randomness, in classical physics, is Birkhoff’s ergodicity. But it is very specific (certain dynamics). Otherwise, randomness is given in terms of probability measure. But this is unsatisfactory, as probability gives a ''measure'' of randomness, not a definition. It is the theory of algorithms, thanks to Martin-Löf, Chaitin and you, that gave a fully general, mathematical, notion of randomness, as a strong form of incomputability, independently of probability theory. Again, physical (classical) randomness is deterministic unpredictability and, by the results above and more in the literature, the role of computational randomness further comes to the limelight. In particular, it provides a very flexible theory of randomness: you can adjust the class of effective randomness tests (Martin-Löf, Schnorr . . . and many more). Our joint hope is that this may help to better grasp, for example, the mathematical difference between classical and quantum randomness.
\\ \\
__CC__: If all papers and books were destroyed by a disaster, but you could keep just one, which one would you choose? Why?
\\ \\
__GL__: I do not think I would survive this, but, just to give a partly random answer: Weyl’s "Philosophy of Mathematics and of Natural Sciences". Along the lines of "Das Kontinuum", it radically departs from the Hilbertian alphabetic myths. Mathematics, actually human thought, is, for formalists and computationalists, reducible to the matching and replacement of sequences of letters: no geometric judgements, no association of gestalts ... this is why this Laplacian mechanics of thought is incomplete. A proof also has a "geometric structure", a remark by Poincaré, and, by this, it is "sensitive to codings". This is also why its formal coding is incomplete. Reasoning is not a chain whose strength is that of the weakest link, as Descartes claimed, but a network, a rope made of many interlaced wires, as suggested by Peirce, reinforcing each other and coupling to meaning and to forms of life. And Weyl globally develops a deep and broad philosophy of knowledge, well beyond the parody of his views in predicativist or intuitionistic terms.
\\ \\
__CC__: If all your papers and books were destroyed by a disaster, but you could keep just one, which one would you choose? Why?
\\ \\
__GL__: The recent paper on "anti-entropy", where some (minor) aspects of Evolution are mathematically described, because ... it is the last one and because, while working at it, I increasingly learned to love Darwin.
\\ \\
__CC__: Many thanks.


\\

----

[{CategoryIndexPlugin category='User/Longo_Giuseppe/OtherInformation'}]