Hacker News

We know the brain is doing something - if you don't want to call it computation, then you might as well call it magic.


Are you positing that the only alternatives are computation and magic?


Seems like a false alternative between computation and magic.


There are other possibilities. For example, there can be an immaterial mind that operates as a halting oracle and interfaces with the world through the brain. Halting oracles are well defined, and we can empirically test for their existence. So there is no reason we have to assume everything humans do is reducible to some sort of automaton. The only reason we make that assumption is prior materialistic commitments.

UPDATE: I've been rate limited for some reason, so here is my response to the question of whether the mind intuitively seems to be a halting oracle.

1. It's obvious there are an infinite number of integers, because whatever number I think of, I can add one to it. A Turing machine has to be given the axiom of infinity to make this kind of inference; it cannot derive it in any way. This intuitively looks like an example of the halting oracle at work in my mind. Or, an even more basic practical example: if I do something and it doesn't work, I try something else, unlike the game AIs that repeatedly try to walk through walls.

2. We programmers write halting programs with great regularity, so it seems like we are decent at solving the halting problem. Also, note that it is not necessary to solve every problem in order to be an uncomputable halting oracle. All that is necessary is the ability to solve an uncomputable subset of the halting problems. So the fact that we cannot solve some problems does not imply we are not halting oracles.


Roger Penrose suggests basically what you say in "The Emperor's New Mind". Roughly, he argues that the brain (likely, in his view) uses quantum computation, and so we can't make an AI out of a classical computer.

The practical flaw with this argument, of course, is that you could instead make an AI that itself uses quantum computation. I asked Roger Penrose about this at a university philosophy meetup over 20 years ago, and he agreed.

Likewise, if there is some kind of halting oracle, perhaps we can work out how the brain creates and connects to that oracle, and make our AI do the same.

Meanwhile, there is no physiological or computational evidence for this possibility. We should keep hunting though, as that's the same thing as understanding the detail of how the brain works!


Well, quantum computation is weaker than a nondeterministic Turing machine, so it's not the same thing I'm saying. Penrose correctly identifies that the mind cannot be a deterministic Turing machine, but his invocation of quantum mechanics does not solve the problem he points out. A DTM can simulate an NTM, and hence anything in between, so the in-between power of quantum computation does not solve anything.
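The claim that a DTM can simulate an NTM can be made concrete with a minimal sketch: a single deterministic loop explores all nondeterministic branches breadth-first. This is an illustration of the idea, not the textbook tape-based construction; the `step`/`accepting` interface and the toy machine below are my own assumptions.

```python
from collections import deque

def ntm_accepts(start, step, accepting, max_configs=100_000):
    """Deterministically simulate a nondeterministic machine by
    breadth-first search over its configuration graph.
    step(config) -> iterable of successor configurations
    accepting(config) -> True if the configuration accepts"""
    seen = {start}
    frontier = deque([start])
    while frontier:
        config = frontier.popleft()
        if accepting(config):
            return True   # some branch accepts
        for nxt in step(config):
            if nxt not in seen and len(seen) < max_configs:
                seen.add(nxt)
                frontier.append(nxt)
    return False          # every explored branch rejects

# Toy machine: from n, nondeterministically go to n+1 or 2*n; accept on 7.
step = lambda n: [n + 1, 2 * n] if n < 20 else []
print(ntm_accepts(1, step, lambda n: n == 7))  # True
```

The deterministic simulator pays an exponential blowup in the worst case, but it decides exactly the same language as the nondeterministic machine, which is the point of the argument above.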

The fundamental problem Penrose identifies boils down to the halting problem, which requires a halting oracle to be solved. Hence, a halting oracle is the best explanation for the human mind, and no form of computation, quantum or otherwise, suffices.

UPDATE:

Since I'm rate limited, here is my answer to the replier's comment:

A partial answer: the mind has access to the concept of infinity, and can identify new, consistent axioms. Other possibilities: future causality and the ability to change the fundamental probability distribution.

But, it's also important to note that we don't have to answer the "how" question in order to identify halting oracles as a viable explanation. We often identify new phenomena and anomalies without being able to explain them, so the identification is a first step.


>But, it's also important to note that we don't have to answer the "how" question in order to identify halting oracles as a viable explanation. We often identify new phenomena and anomalies without being able to explain them, so the identification is a first step.

I don't think it constitutes an explanation at all, let alone a viable one, if all it does is beg the same question.

The problem was already identified: "how does human cognition work?" You've renamed it: "how does this supposed halting oracle work?" That might be an interesting framing, but it is not a viable explanation of anything until you've proved that such oracles exist, which is to say, until you've solved the halting problem.


>Hence, a halting oracle is the best explanation for the human mind

What does it explain though? That the human brain has a black box capable of solving certain problems... how exactly?


Indeed - it's essentially the homunculus fallacy, or magic dressed up in the language of knowledge.

https://en.wikipedia.org/wiki/Homunculus_argument


Your theory doesn't seem falsifiable short of actually making an AGI, whose very definition is notoriously slippery.

You might as well just say the mind resides in the soul.


> We programmers write halting programs with great regularity.

Making any program a halting program is trivial: add a counter of executed instructions, and halt the program when the counter reaches some limit. Proving that an arbitrary program halts is an entirely different task.
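The instruction-counter trick can be sketched in a few lines. Here a "program" is modeled as a step function over some state; the name `make_halting` and the fuel limit are illustrative assumptions, not any standard API.

```python
def make_halting(step, state, fuel=1_000_000):
    """Force termination by counting executed steps.
    step(state) -> (done, new_state); stops after `fuel` steps regardless."""
    for _ in range(fuel):
        done, state = step(state)
        if done:
            return "finished", state
    return "fuel exhausted", state   # would-be-infinite runs are cut off

# A loop that never terminates on its own is still forced to halt:
spin = lambda s: (False, s + 1)
print(make_halting(spin, 0, fuel=100))   # ('fuel exhausted', 100)
```

Note that the wrapper halts by construction; it tells you nothing about whether the original, unwrapped program would have halted.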

> So, the fact that we cannot solve some problems does not imply we are not halting oracles.

If it's allowed to not solve some problems, then I can write such an oracle:

Run a program for a million steps. If the program has halted, output "Halts"; otherwise output "Don't know".

It can't solve some problems, but by your logic that doesn't imply it's not a halting oracle. You are missing something.
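The bounded "oracle" described above can be written down directly. As an assumption for the sketch, programs are modeled as Python generators that yield once per step.

```python
def bounded_halts(program, limit=1_000_000):
    """Run a generator-based program for up to `limit` steps.
    Sound when it answers 'Halts', but it never answers 'Loops'."""
    it = program()
    for _ in range(limit):
        try:
            next(it)            # execute one step
        except StopIteration:
            return "Halts"      # program finished within the budget
    return "Don't know"         # budget exhausted; no conclusion

def finite():       # halts after 10 steps
    for _ in range(10):
        yield

def forever():      # never halts
    while True:
        yield

print(bounded_halts(finite))               # Halts
print(bounded_halts(forever, limit=1000))  # Don't know
```

This is an ordinary computable program, which is exactly the point: "solves some halting instances" is far weaker than "is a halting oracle".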


There doesn't seem to be much reason to challenge the assumption: humans aren't good at solving any problems that we know to be uncomputable (e.g. the halting problem). Sure, it's a thing you could investigate, but the explanation for why it's not a popular topic is that it doesn't seem like a fruitful area of research.

Also, personally, what my mind is doing doesn't feel like it's invoking an oracle for my problem solving. Generally, when the search space for a problem I'm solving grows, I experience the kinds of blowups in difficulty that would arise from my following an algorithm. Now, not everybody is the same. Do you feel like your problem solving calls an oracle?


> A Turing machine has to be given the axiom of infinity to make this kind of inference, it cannot derive it in any way.

Why not? Are you aware of a proof of this? I think you are limiting the capabilities of Turing machines without evidence.

> Unlike the game AIs that repeatedly try to walk through walls.

Game AIs' capabilities are a small subset of what a Turing machine can do. Most game AIs can't do speech recognition or solve math equations either.

> We programmers write halting programs with great regularity.

So do other programs. Writing a halting program is not an uncomputable problem, and doesn't require solving the halting problem.


I just want to add that materialistic commitments don't even necessarily imply computability. Entropy, for example, doesn't seem to be computable[1].

[1] https://arxiv.org/pdf/0808.1678.pdf



