determinism = g(x, t_future) fully set by g(x, t_now) and g
if you model a geometric function, g : Real -> Real, with a computable one, c : Int -> Int, then there are gaps at arbitrarily high precisions, say p (e.g., p = delta(g, c) at (x, t))
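A toy sketch of such a gap in Python (the names and the choice of sin are illustrative, not a physical model): a fixed-precision c stands in for the computable replacement, and the gap delta(g, c) is nonzero once you look past its precision.

```python
import math

# g: the "geometric" function, here sin as a stand-in
# c: a computable replacement that only carries 6 decimal digits
def c(x, digits=6):
    # round the input and the output: a fixed-precision approximation
    return round(math.sin(round(x, digits)), digits)

x = 1.0 / 3.0
gap = abs(math.sin(x) - c(x))  # delta(g, c) at this x
print(gap)  # small but nonzero
```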
construct a classical system of arbitrary complexity (e.g., 10^BIG interactions), and describe each interaction with g. Since 10^BIG interactions are involved, "delta(g, c) > BIG" is required in order for the system to remain deterministic (i.e., described by g). We can easily find cases where BIG > delta(g, c), so CM would be non-deterministic if g were replaced by c.
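For concreteness, a minimal Python sketch of the error amplification (the logistic map standing in for a chaotic interaction; all names are illustrative): a double-precision c tracks a higher-precision reference for only a few dozen steps before the gap reaches O(0.1), i.e., the finite-precision trajectory stops being predicted by the reference one.

```python
from decimal import Decimal, getcontext

def first_divergence(x0, tol, max_steps):
    # track the same logistic-map orbit x -> 4x(1-x) at two precisions:
    # a 60-digit Decimal "reference" (the g) and an ordinary double (the c)
    getcontext().prec = 60
    a = Decimal(x0)
    b = float(x0)
    for n in range(1, max_steps + 1):
        a = 4 * a * (1 - a)
        b = 4 * b * (1 - b)
        if abs(float(a) - b) > tol:
            return n  # step at which the gap exceeds tol
    return None

n = first_divergence("0.1", 0.1, 200)
print(n)  # typically a few dozen steps
```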
As for QM, these "gaps" cause much deeper contradictions with the premises of QM.
If you replace wavefunctions, g, with computable ones, c, then they don't sum to solutions of the wave equation, so QM fails to be linear (the delta(g, c) are massive because Hilbert space is infinite-dimensional).
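A toy numeric illustration of the linearity point (Python floats standing in for finite-precision replacements, purely illustrative): truncation does not commute with addition, so sums of truncated solutions need not be truncated solutions.

```python
def c(x, digits=1):
    # finite-precision stand-in for a computable replacement of g
    return round(x, digits)

a, b = 0.24, 0.24
print(c(a) + c(b))  # 0.4
print(c(a + b))     # 0.5 -> truncation breaks exact linearity
```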
Now it might be that reality is really computable in the sense that there's some c which can replace g, but this would violate the assumptions of physics and has no motivation. Physics might be wrong, but there's no evidence of that.
There are also other issues, but these are just two off the top of my head.
References: look for the physical Church-Turing thesis, the Church-Turing thesis, non-determinism and determinism in chaos theory, non-determinism in classical mechanics, and physical interpretations of the reals -- this will be in postgrad work, it won't be in popsci books.
> if you model a geometric function, g : Real -> Real, with a computable one, c : Int -> Int, then there are gaps at arbitrarily high precisions, say p (e.g., p = delta(g, c) at (x, t))
Nobody takes "computable approximation to g: R -> R" to mean "a computable function c: R* -> R*", where R* is the computable reals. There are many mathematical issues with this, caused by self-referential programs (realised by Turing himself in "On Computable Numbers"). Typically you would model it as "c: R* x Q -> R*", where Q is a rational describing your desired precision, right?
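For what it's worth, here's a toy Python version of that signature (Fraction playing the role of Q; the bisection is illustrative, not Turing's actual construction): the real is a program that, given a requested precision eps, returns a rational within eps of the value.

```python
from fractions import Fraction

def sqrt2(eps: Fraction) -> Fraction:
    # a computable real: map a requested precision eps to a rational
    # within eps of sqrt(2), via interval bisection on [1, 2]
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

approx = sqrt2(Fraction(1, 10**6))
print(abs(approx * approx - 2) < Fraction(1, 10**5))  # True
```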
> Since 10^BIG interactions are involved, "delta(g, c) > BIG" is required in order for the system to remain deterministic (i.e., described by g).
I'm not sure what you mean by this - the computable approximation "c" is deterministic essentially by definition. If you mean "in order to remain within some bound of g" I can kinda see what you're saying but in that case you can interleave computations with smaller and smaller precisions (the "Q" I mentioned) in order to work around that issue, right? It won't be efficient, but it will certainly be computable.
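A sketch of that interleaving idea in Python (toy names; `third` is a hypothetical precision-parameterised real): shrink the requested precision until successive answers agree. Inefficient, as noted, but computable.

```python
from fractions import Fraction

def eval_to_tolerance(c, tol):
    # call the precision-parameterised approximation c(eps) with ever
    # smaller eps until successive answers agree to within tol
    eps = Fraction(1, 10)
    prev = c(eps)
    while True:
        eps /= 10
        cur = c(eps)
        if abs(cur - prev) < tol:
            return cur
        prev = cur

def third(eps: Fraction) -> Fraction:
    # a computable real for 1/3: truncate the decimal expansion until
    # the truncation error drops below eps
    n = 1
    while Fraction(1, 10**n) > eps:
        n += 1
    return Fraction(10**n // 3, 10**n)

x = eval_to_tolerance(third, Fraction(1, 10**9))
print(abs(x - Fraction(1, 3)) < Fraction(1, 10**8))  # True
```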
> References: look for the physical Church-Turing thesis, the Church-Turing thesis, non-determinism and determinism in chaos theory, non-determinism in classical mechanics, and physical interpretations of the reals -- this will be in postgrad work, it won't be in popsci books.
Thanks! I don't know much chaos theory, I'll have a look around for a good textbook.
Edit: I just want to say - you have a pretty wild way of writing that makes it hard for me to tell if you're a crank or not. Either way, reading your posts here has given me a ton of food for thought =) what's your background?
1 yr medicine, 6 yr physics, 4 yr debating union, 20 yr c programming, 20 yr love of political and stand up comedy, 15 yr software eng, 10 yr data scientist, 15 yr python, 22 yr informal & formal philosophy, 8 yr data sci & software consult/coach to finance/defence/... and maybe soon, 4 yr PhD AI & HCI
Of those, you may decide which is the most relevant to my writing style. The amount of theatrics and irony in a live delivery might change the interpretation.
Replacing R with Q is just replacing it with (Int, Int) -- so be it. My claim concerns whether CM assumes determinism (it does) and therefore requires infinite precision; any gap whatsoever that goes missing means P(t_next|t_now) < 1.
You might say this indicates reality doesn't follow CM, and so that CM is wrong and (some now less hegemonic views of) QM are correct -- reality isn't deterministic.
Fine, but QM makes the situation worse, since it is linearity that is now under threat: we would not be able to compose QM systems linearly if the wavefunctions didn't have infinite dimension.
One important assumption here is that we ought to take the explicit and implicit assumptions of physics as given as our starting point, i.e., Prior(Physics) = High, and Posterior(NotPhysics|Physics) = Low.
So the dialectical burden is on the "computationalists" to show that there is a workable theory of physics, at every level, which either (1) preserves the assumptions of physics; or (2) motivates non-circularly why those assumptions are wrong.
Given the premise on priors above, the argument "physics is wrong because reality is computational" is both circular and unpersuasive (this doesn't mean it's wrong, just that no reason has been presented).
Wild, well, I'm a lowly math PhD so that's where my interests lie =)
I'm _not_ suggesting we replace R with Q. I'm suggesting that you bake in the desired accuracy of your computational approximation as an input. This is how Turing evades self-referential problems in his conception of computational reals, and also perhaps how you evade your criticisms with CM requiring infinite precision.
Similarly - I think it's reasonable in a computational context to assume linearity up to an error bound that is provided as an input. Of course things become non-computable if you ask for exact linearity. Equality itself is non-computable!
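A small Python sketch of that point (names illustrative): with precision-parameterised reals you can certify closeness within an eps, but no finite number of queries certifies exact equality -- two different programs for the same real pass the eps test at every precision you try.

```python
from fractions import Fraction

def close_within(a, b, eps: Fraction) -> bool:
    # query both reals at a finer precision than eps and compare;
    # this certifies |a - b| < eps, but can never certify a == b exactly
    pa, pb = a(eps / 4), b(eps / 4)
    return abs(pa - pb) <= eps / 2

half_a = lambda eps: Fraction(1, 2)            # one program for 1/2
half_b = lambda eps: Fraction(1, 2) + eps / 8  # another program for 1/2

print(close_within(half_a, half_b, Fraction(1, 10**6)))  # True
```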
Either way I think we agree about physics. I don't believe the universe is describable as a computable function. Merely that we can approximate it to arbitrary degrees of accuracy =P
I think we teach people only what we can write in a finite formula, and compute in finite time.
This is, imv, much like teaching people what's under a street light just because everything else is in darkness.
I think, philosophically, we can build inferential telescopes that point to the vast (epistemic) blackness, inside say, a proton, or a cell, or the chaos in water.
As an ameliorative, or therapeutic project, I think people who build computational models too much should meditate on the number of protons flowing free in a drop of water, and what properties their interactions might bring about. And whether it would ever be possible to know them.
I've read this thread exchange with interest, but what about the results that quantum computers are simulatable by classical computers? See David Deutsch 1985. This would reduce the issue of infinite Hilbert spaces to simulation using quantum computers, and in turn, Deutsch's result which says classical Turing machines can actually simulate quantum computers.
You can always make local arguments that, say, some g can be substituted with some c.
The issue is broader than that. It concerns the premises of vast areas of physics -- you have to show they are more likely false than true.
This isn't an argument saying no c can be found for any given g; it's saying "g-c gaps have empirical consequences we haven't observed", and if we did observe them, physics would be foundationally wrong.
When they assert theorems like "classical TMs can simulate quantum TMs" they mean the simulation is gapless. Otherwise they use the term approximation.
Hilbert space = set of functions Real -> Real
geometrical & non-computable = Reals