
It fundamentally does not matter. Matrix multiplication does not erase the truth of Gödel and Turing.


Gödel and Turing just proved that there are some true things that can't be proved, and things that cannot be computed. They didn't show where those boundaries are.

They certainly didn't show those boundaries to be below the level of human cognition.


Gödel proved that there are unprovable statements. Turing showed that certain classes of problems can only be solved by machines with infinite tapes. Thus no bounded LLM can possibly solve every Turing-computable problem. Only a theoretically infinite chain of thought could give us that power.
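A toy way to see the bounded-compute point (the setup and names here are mine, purely illustrative): a machine forced to commit to an answer within a fixed step budget must fail on inputs that need more steps than the budget allows, while the same procedure with no limit succeeds.

```python
def bounded_decide(bits, step_budget):
    """Toy 'fixed compute per answer' machine: it may inspect at most
    step_budget symbols before it must commit to an output."""
    parity = 0
    for i, b in enumerate(bits):
        if i >= step_budget:
            return None  # budget exhausted, forced to answer without finishing
        parity ^= b
    return parity == 0

def unbounded_decide(bits):
    """The same task with no step limit (an 'infinite chain of thought')."""
    parity = 0
    for b in bits:
        parity ^= b
    return parity == 0

bits = [1] * 1000                   # 1000 ones, so parity is even
print(bounded_decide(bits, 100))    # None: the budget ran out first
print(unbounded_decide(bits))       # True
```

Parity is a deliberately trivial stand-in; the point is only that any fixed budget is beaten by a long enough input.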

Gödel then tells us that, if we have such a system, there are things on which it may get stuck.

Indeed, this is what we see in chain-of-thought models. If you give them an impossible problem, they either give up or produce a seemingly infinite series of tokens before emitting the </think> tag.

Turing tells us that predicting the behavior of any set of matrices modeling a finite state machine over an infinite token stream is an instance of the halting problem.
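The obstruction being invoked here is Turing's diagonal argument, which can be sketched in a few lines of toy Python (my names, not from the thread): given any claimed halting oracle, we can build a program the oracle must misjudge.

```python
def make_trouble(halts):
    """Given any claimed halting oracle halts(prog), construct a
    program that the oracle necessarily gets wrong."""
    def trouble():
        if halts(trouble):
            while True:   # the oracle said we halt, so loop forever
                pass
        # the oracle said we loop, so just return (i.e. halt)
    return trouble

# Whatever a concrete 'oracle' answers about its own trouble program,
# the answer is wrong. Here the oracle claims trouble loops forever:
t = make_trouble(lambda prog: False)
t()   # ...yet this call returns immediately
```

The opposite oracle (one that answers True) is refuted the same way, though we can't demonstrate that branch by running it, since the refutation is an infinite loop.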


Theoretical computability is of dubious practical relevance.

Consider two problems:

Problem A is not computable

Problem B is computable in principle, but, even for trivially sized inputs, the best possible algorithm requires time and/or space we'll never have in practice, orders of magnitude too large for our physical universe
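A concrete instance of a Problem B (my example, not the commenter's): brute-forcing a 300-bit key is a finite, trivially "computable" search, yet the search space dwarfs the roughly 10^80 atoms in the observable universe.

```python
# Decidable in principle, hopeless in practice: exhaustively trying
# every 300-bit key is a finite search...
search_space = 2 ** 300

# ...but the space is vastly larger than the observable universe
# (~10^80 atoms, the standard rough estimate).
atoms_in_universe = 10 ** 80

print(search_space > atoms_in_universe)   # True
print(len(str(search_space)))             # 91 decimal digits
```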

From a theoretical computer science perspective, there is a huge difference between A and B. From a practical perspective, there is none whatsoever.

The real question is "can AIs do anything humans can do?" And appealing to what Turing machines can or can't do is irrelevant, because there are literally infinitely many problems which a Turing machine can solve but no human nor AI ever could.


So the article is about what humans vs. LLMs can do, except that in the article "LLM" is taken to mean just a single-output autoregressive model (no chain of thought). Since an LLM performs a constant number of steps at each token generation, no, it cannot do everything a human can. Humans can choose when to think and can ponder the next action indefinitely. That's my point: when we force LLMs to commit to a particular answer by forcing an output at each token generation, the class of problems they can solve is trivially smaller than that of the equivalent human.


I agree that a raw autoregressive LLM with just a single output is (almost necessarily) less capable than a human. Not only can we ponder (chain-of-thought style), we also have various means available to us to check our work. For a coding problem, we can write the code, see if it compiles, runs, and passes our tests; if it doesn't, we can look at the error messages, add debugging, try some changes, and iterate until we hopefully reach a solution, or else give up. The "single output" constraint denies all of that.
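The write/run/read-errors/retry loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not any real agent framework's API: `generate(feedback)` represents an arbitrary model call.

```python
import os
import subprocess
import sys
import tempfile

def solve_with_feedback(generate, max_attempts=5):
    """Sketch of an iterate-until-the-tests-pass loop: generate(feedback)
    stands in for any model call that returns candidate Python source."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate(feedback)
        with tempfile.NamedTemporaryFile(
                "w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return code               # the script's own asserts passed
        feedback = result.stderr      # feed the errors into the next try
    return None                       # or else we give up

# Stub 'model': its first draft has a bug, which it fixes once it
# sees the NameError in the feedback.
def fake_model(feedback):
    if "NameError" in feedback:
        return "x = 1\nassert x == 1\n"
    return "assert x == 1\n"          # first draft: x is undefined

print(solve_with_feedback(fake_model) is not None)   # True
```

The stub makes the structural point: the second attempt succeeds only because the loop feeds the runtime error back in, which is exactly what a single-output model never gets to see.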

I don't think anyone is actually expecting "AGI" to be achieved by a model labouring under such extreme limitations as a single-output autoregressive LLM. If instead we are talking about an AI agent with not just chain of thought, but also function calling to invoke various tools (including to write and run code), the ability to store and retrieve information via RAG, etc. – well, current versions of that aren't "AGI" either, but it seems much more plausible that they might eventually evolve into it.

I don't think we need to invoke Turing or Gödel in order to make the point I just made, and I think doing so is more distracting with irrelevancies than actually enlightening.


Yeah, the grounded take is that Turing and Gödel apply just as much to human intelligence. If not, someone please go ahead and use this to physically prove the existence of an immortal, hypercomputational soul.


Who is trying to “erase the truth of Gödel and Turing”? (Well, some cranks are, but I don’t think that’s who you are talking about.)

Gödel and Turing’s results do not appear to give any reason that a computer program can’t do what a person can do.


That's not the point. A computer program with a finite number of steps (an autoregressive LLM without chain of thought) has a limit on what it can reason out in one step. This article does a lot of wordcelling to show this obvious point.


That seems irrelevant to Gödel? If that was your point, you should have said so, rather than the things about Turing and Gödel (which lead people to expect you are talking about the halting problem and incompleteness, not the limitations that come from a limited-depth circuit).



