In this post, I want to discuss some of David Deutsch’s positions from “The Beginning of Infinity”.
“The Beginning of Infinity” is a bold, extremely optimistic hymn to science, and to the growth of human knowledge in general. In fact, one of the key propositions of the book is that the paradigm of seeking good explanations through creativity and criticism (which humanity has used in science since the Enlightenment with astonishing success) applies to art, philosophy, and history as well.
“The Beginning of Infinity” is a rare kind of book: one I disagreed with a lot yet wanted to read to the end.
Deutsch is a scientific realist: he thinks that true explanations of phenomena in science (as well as in art, philosophy, and history) do exist, even though we may never actually attain them, only get ever closer to the truth.
Deutsch labels instrumentalism (the view that science cannot possibly come up with explanations of anything, only predictions) and behaviorism (instrumentalism applied to psychology: the view that behaving as if one can think is the same as thinking) as misconceptions.
I don’t remember whether Deutsch tried to logically prove this position in the book, so I cannot point out exactly where I disagree with him.
Personally, I believe in instrumentalism and behaviorism because I also believe in the Computable Universe Hypothesis, a.k.a. digital philosophy. If reality is equivalent to computation, then saying that there are true scientific explanations, that is, exact models of phenomena, would be equivalent to saying that we can “outcompute” the universe. The basic structure of the universe’s computation might indeed be such that this is theoretically possible. But it is not guaranteed to be possible for every possible structure of the universe’s computation, because this problem is equivalent to the halting problem.
To quote Stephen Wolfram:
And actually even before that, we need to ask: if we had the right rule, would we even know it? As I mentioned earlier, there’s potentially a big problem here with computational irreducibility. Because whatever the underlying rule is, our actual universe has applied it perhaps 10⁵⁰⁰ times. And if there’s computational irreducibility — as there inevitably will be — then there won’t be a way to fundamentally reduce the amount of computational effort that’s needed to determine the outcome of all these rule applications.
However, Wolfram immediately goes on to say:
But what we have to hope is that somehow — even though the complete evolution of the universe is computationally irreducible — there are still enough “tunnels of computational reducibility” that we’ll be able to figure out at least what’s needed to be able to compare with what we know in physics, without having to do all that computational work. And I have to say that our recent success in getting conclusions just from the general structure of our models makes me much more optimistic about this possibility.
But Wolfram’s optimism here is not even a philosophical position, it’s just his attitude. The philosophical premise remains that the laws of physics, even if somehow deducible from the basic structure of the universe’s computation, will remain models of what’s happening, not absolute truths. Models naturally have bounds of applicability: for example, general relativity and quantum mechanics apply on different scales. Models may also cease to apply under certain conditions, and it’s not just the poster examples of the Big Bang and black holes. For example, John Barrow writes:
There exist equilibria characterised by special solutions of mathematical equations whose stability is undecidable. In order for this undecidability to have an impact on problems of real interest in mathematical physics the equilibria have to involve the interplay of very large numbers of different forces. While such equilibria cannot be ruled out, they have not arisen yet in real physical problems. Da Costa and Doria went on to identify similar problems where the answer to a simple question, like ‘will the orbit of a particle become chaotic’, is Gödel undecidable. They can be viewed as physically grounded examples of the theorems of Rice and Richardson which show, in a precise sense, that only trivial properties of computer programs are algorithmically decidable.
Truth (including truth about physical theories) can only be constructive, that is, computable.
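Wolfram’s point about computational irreducibility can be made concrete with a toy sketch (my own illustration, not from the book). Rule 30 is an elementary cellular automaton whose centre column is conjectured to have no known shortcut formula: as far as anyone knows, the only way to learn the centre cell at step n is to actually run all n steps of the evolution.

```python
def rule30_step(cells):
    """One synchronous update of Rule 30 on a fixed-width row (zero boundary).

    New cell = left XOR (centre OR right), which reproduces Rule 30's
    truth table (111->0, 110->0, 101->0, 100->1, 011->1, 010->1, 001->1, 000->0).
    """
    n = len(cells)
    return [
        (cells[i - 1] if i > 0 else 0) ^ (cells[i] | (cells[i + 1] if i < n - 1 else 0))
        for i in range(n)
    ]

def centre_column(steps, width=201):
    """Evolve from a single central 1 and record the centre cell at each step."""
    row = [0] * width
    row[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(row[width // 2])
        row = rule30_step(row)
    return column

print(centre_column(16))  # seemingly patternless sequence of 0s and 1s
```

The irreducibility claim is that nothing qualitatively cheaper than this loop is known for predicting the centre column far ahead; by analogy, a universe running such a rule could not be “outcomputed” by any model simpler than the universe itself.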
Is GPT-3 deepfaking understanding?
Whether GPT-3 really has common-sense knowledge or is just “deepfaking understanding” is an instance of the instrumentalism vs. realism debate.
Can the universe carry out an infinite amount of computation?
Deutsch writes that even if the universe is finite (he also discusses in the book that it may be infinite), and regardless of whether it is heading toward a Big Crunch or a Big Chill, it is possible to perform an infinite amount of coherent computation “until the end”. Therefore, an infinite amount of knowledge can be acquired through this computation.
I can see how infinite computation can be performed in the Big Chill scenario (which is the most probable one, scientists think). In this case, cosmic-scale structures will acquire knowledge by exchanging information with ever-growing latencies: years, then millions of years, then millions of millions of years, etc. However, for such structures, “knowledge” likely means something very different than it means for us. As Stephen Wolfram wrote:
In thinking about our “place in the universe” there’s also another important effect: our brains are small and slow enough that they’re not limited by the speed of light, which is why it’s possible for them to “form coherent thoughts” in the first place. If our brains were the size of planets, it would necessarily take far longer than milliseconds to “come to equilibrium”, so if we insisted on operating on those timescales there’d be no way — at least “from the outside” — to ensure a consistent thread of experience.
From “inside”, though, a planet-size brain might simply assume that it has a consistent thread of experience. And in doing this it would in a sense try to force a different physics on the universe. Would it work? Based on what we currently know, not without at least significantly changing the notions of space and time that we use.
However, I don’t see how an infinite amount of computation would be possible in the Big Crunch scenario (unless the universe is itself infinite). At some point, when the diameter of the universe is about the Planck length (or probably even much larger), the universe will not be able to perform any more computation. For infinite computation to work out in the Big Crunch scenario, physics would have to support infinitely divisible space, matter, and information carriers, Achilles-and-tortoise style, which is not the case.
Infinity as an idealistic idea
Deutsch criticises finitism: “Finitism just prevents us from understanding entities beyond our direct experience.” As you can see, Deutsch connects infinity and knowledge by justifying each with the other.
Deutsch’s argument about finitism is an idealistic, almost theistic statement. Compare: “Atheism just prevents us from understanding entities beyond our direct experience.” As far as I understand, it was Georg Cantor who started to think deeply about the relationship between God and infinity.
As Deutsch is also a realist, his philosophy is an interesting mix of realism and idealism, positions that are usually considered opposite to one another. Fred M Beshears arrives at the same conclusion from a different angle:
Although David Deutsch is a follower of Popper, his own philosophy is more of a synthesis of Idealism and Realism. In particular, Deutsch argues that some abstractions — those that are referred to by our best explanation of some field — should be considered to be ‘objectively’ real even though they are not ‘physically’ real. So, Deutsch does not think that ideas are less fundamentally real than external physical entities, or vice versa. Both are fundamental.
Notice that the title of this book is The Beginning of Infinity: explanations that transform the world. A hard core realist would say that coming up with a better explanation may transform our understanding of the world, but it wouldn’t transform the world itself. So, it doesn’t sound like Deutsch is a hard core realist. But, he’s not a hard core idealist either.
I think that infinity, of course, does exist as an abstraction, a model, and it can be a useful part of other models. However, I think Deutsch is wrong when he states that the universe, computation, and the growth of knowledge are infinite. We can apply his own attitude toward theories here: they all contain misconceptions, and we don’t know what discoveries await us in the future. For example, if physicists confirm proton decay, then infinite computation might not be possible in any universe.
An extreme application of this principle is that if we live in a simulation, it can be stopped abruptly “without any notice”. Hence, even if according to the laws of physics the universe could exist forever, we can never be sure it indeed will.
This post was originally published on Substack.