Maaaaybe. I tend to think that symbolic reasoning is a learning tool rather than a goalpost for general intelligence. For example, we use symbolic reasoning quite extensively when learning to read a new language, but once fluent we can rely on something closer to raw processing - no more sounding out character sequences. Similarly with chess - eventually we have good mnemonics for what makes a good play, and can play blitz reasonably well.
And - let's be real - a lot of human symbolic reasoning actually happens outside of the brain, on paper or computer screens. We painstakingly learn relatively simple transformations and feedback loops for manipulating this external memory, and then bootstrap it into short-term reaction via lots of practice.
I tend to think that the problems are:
a) Tightly defined / domain-specific loss functions. If all I ever do is ask you to identify pictures of bananas, you'll never get around to writing the Great American Novel. And we don't know how to train the kinds of adaptive or free-form loss functions that would get us away from these domain-specific losses.
b) Similarly, I have a soft spot for the view that a mind is only as good as its set of inputs. We currently mostly build models that are only receptive (image, sound) or generative. Reinforcement learning is making progress on feedback loops, but I have the sense that there's still a long way to go.
c) I have the feeling that there's still a long way to go in understanding how to deal with time...
d) As great as LSTMs are, there still seems to be some shortcoming in how memory gets incorporated into networks. LSTMs give a decent approximation of short-term memory, but it still seems far from great (a toy probe of this is sketched below). This might be the key to symbolic reasoning, though.
Writing all that down, I gotta say I agree fundamentally with the DeepMind research priorities on reinforcement learning and multi-modal models.
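On point (d), here's a minimal toy probe of what "decent but far from great" short-term memory looks like, assuming PyTorch; the recall-the-first-token task and all the names are my own illustration, not anything from this thread:

```python
# Toy probe of point (d): train an LSTM to recall the FIRST token of a
# sequence after reading the whole thing -- a crude short-term-memory test.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, SEQ_LEN, HIDDEN = 8, 20, 32

class Recaller(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])   # predict from the final hidden state

model = Recaller()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    batch = torch.randint(0, VOCAB, (64, SEQ_LEN))
    target = batch[:, 0]             # the token the network must remember
    loss = loss_fn(model(batch), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Accuracy tends to fall off as SEQ_LEN grows -- the "decent approximation
# of short-term memory, but far from great" behavior described above.
```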
Once someone is fluent in a language, the logical operations and judgements involved stop being overt and highly visible to the conscious mind. But that doesn't mean that one stops getting the benefits and results of logical operations.
What you might see as logical operations "not mattering", I would see as logical operations integrated so deeply into reflexive operations that it's hard to see where one ends and the other begins. The contrast is that humans can do pattern recognition in a neural-net fashion, taking something like the multidimensional average of a set of things. But a human can also receive a language-level input that some characteristic is or isn't important for recognizing a given thing, and incorporate that input into their broad-average concepts. That kind of thing can't be done by deep learning currently - well, not in a non-kludgey sort of way.
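For what it's worth, the kludgey version is easy to picture: a nearest-centroid classifier (literally the "multidimensional average" above) where a language-level hint gets hand-translated into a feature mask. Everything here - the feature names, numbers, and hint format - is invented for illustration:

```python
# The "kludgey" way to take a language-level hint: a nearest-centroid
# classifier where a verbal input like "texture doesn't matter here"
# is hand-translated into a feature mask.
import numpy as np

FEATURES = ["color", "size", "texture"]

def centroid(examples):
    return np.mean(examples, axis=0)          # the broad-average concept

def classify(x, centroids, ignore=()):
    mask = np.array([0.0 if f in ignore else 1.0 for f in FEATURES])
    dists = {label: np.linalg.norm((x - c) * mask)
             for label, c in centroids.items()}
    return min(dists, key=dists.get)

centroids = {
    "banana": centroid(np.array([[0.9, 0.4, 0.2], [0.8, 0.5, 0.3]])),
    "lemon":  centroid(np.array([[0.9, 0.2, 0.6], [0.8, 0.1, 0.7]])),
}
x = np.array([0.9, 0.4, 0.6])                 # a banana with a misleading texture reading
print(classify(x, centroids))                 # -> "lemon" (fooled by texture)
print(classify(x, centroids, ignore=("texture",)))  # -> "banana" (hint applied)
```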
Similarly, I have a soft spot for the view that a mind is only as good as its set of inputs.
It depends on what you mean by that. A human can take inputs on one thing and apply them seamlessly to another. Neural nets tend to be very dependent on the task-focused content fed to them.
I think "a better set of inputs" is the real world or much better simulators to train our RL agents. François Chollet (author of Keras) was saying a similar thing - focusing too much on architectures and algorithms we forget the importance of the environment, an agent can only become as smart as the hardest problem it has to solve in its environment, and depends on the richness of said environment for learning. Humans are not general intelligences either, we're just good at being human (surviving in our environment). We'd be much smarter in a richer environment, too.
There's a parallel between something being logical and it "feeling right", without any necessary connection between the two at the "implementation level". In the same way, an artificial NN recognizer that recognizes something unambiguously - rather than getting caught awkwardly with multiple weak or conflicting activations - parallels a logical system using rules to detect a contradiction, without the second ever needing to be embedded in the first, however deep. It's just that illogical inputs never got good training, because they either don't happen or have no meaningful training data.
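One concrete way to read "multiple weak or conflicting activations" is the entropy of a softmax output: low when one class dominates, high when several compete - loosely the net's analogue of a rule system flagging a contradiction. A sketch with invented logits:

```python
# A confident prediction has low entropy; an input that lights up several
# classes at once has high entropy -- roughly the net's version of
# "this doesn't check out". The logit values are made up for illustration.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

clean_input = softmax(np.array([6.0, 0.5, 0.2]))   # one strong activation
weird_input = softmax(np.array([2.1, 2.0, 1.9]))   # conflicting activations

print(entropy(clean_input))   # low  -> "feels right"
print(entropy(weird_input))   # high -> "something's off"
```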
I, personally, just know I don't use logical rules very often at all. Usually I apply them retroactively as a post-hoc justification, or narrative, to explain a sense of discomfort or internal conflict or dissonance, but I have no way of knowing whether my rationale is true other than how it makes me feel - I'm simply relying on the same mechanism, with an extra set of pattern recognition learned specifically to identify fallacies and incorrect logical constructs. If I didn't have that extra training, my explanations could be illogical and I'd be none the wiser.
I think humans are very bad at logical reasoning and very inefficient at it. Only a small % of the population ever does it, and they usually do it incorrectly, with biases, working backward to justify an already-held conclusion. They're great at pattern recognition, though. I don't think logical reasoning is anywhere on the critical path to human-level AGI at a deep level. It could very well be a parallel system, though, to help train recognition if we don't figure out better ways of doing that.
Well, neural nets and similar things make for laughably bad AI systems when confronted with "real world" situations.
I wouldn't argue with the point that humans use rigorous logic and overt rules-based behavior much less than they imagine (your summary is very much the model of mind from the other NLP - neuro-linguistic programming - which I'm familiar with).
I'd argue that while "refined", systematic logic might be rare, fairly crude logic - more or less indistinguishable from simply using language - is everywhere, and it is an incredibly powerful tool that humans have. Again, being able to correct object recognition based on things people tell you is an incredibly powerful thing. You don't need full rationality for this, but it gets you a lot. And that's just a small-ish example.
Intelligence is not limited to what Humans are good at. People are really bad at several tasks where current AI tech excels, but those things tend to be excluded from the conversation.
AGI that is as smart as, say, a rat would easily qualify as AGI even without language skills.
Intelligence is not limited to what Humans are good at.
Being able to implement all the things humans are good at, however, should get us everything that we could do, because anything we could create, it could create too.
AGI that is as smart as, say, a rat would easily qualify as AGI even without language skills.
Indeed, but while a full language-using AI is a ways away at least, using language is one thing that's at least sort-of describable/comprehensible as a goal. A rat is a lot more robust than any human-made robot - but how? Overall, I keep hearing these "there's intelligence that's totally unlike what we conceive" arguments, but it seems like computer programs as they exist now either do what a human could do rationally, only more quickly (a conventional program), or heuristically duplicate human surface behavior (neural nets). You could sort-of argue for more, but it's a bit tenuous. Human behavior is very flexible already (that's the point, right?). And assuming AI is hard to create, creating something whose properties we to some extent understand is more likely than creating the wild unknown AI.
Also, "Getting to rat level" might not be the useful path to AGI. If we simply created a rat like thing, we might win the prize of "real AGI" but it would be far less useful than something we could tell what to do the way we tell humans what to do.
A rat can do something else that a neural net can't: it is a self-replicator. Our neural nets don't have self-replication, or a huge, complex environment and timescale to evolve in. Self-replication creates an internal goal for agents - survival - and that drives learning. Instead, we just train agents with human-made reward signals. Even a simple environment, like the Go board, when used for training many generations of agents in self-play, easily leads to super-human intelligence. We don't have the equivalent simulator for the real-world environment, nor the appetite to let loose billions of self-replicating AI agents in the real world.
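A minimal sketch of the self-play idea, where the only reward is the game's own outcome rather than a hand-made signal. The game (Nim: take 1-3 sticks, taking the last stick wins) and every detail here are my own toy stand-in, not how the Go systems are actually built:

```python
# Self-play with no hand-crafted reward shaping: both sides share one
# action-value table, updated Monte Carlo-style from the game result alone.
import random
from collections import defaultdict

Q = defaultdict(float)         # Q[(pile, move)], shared by both players
EPS, ALPHA = 0.1, 0.2

def pick(pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < EPS:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(pile, m)])

for episode in range(20000):
    pile, history = 15, []
    while pile > 0:
        move = pick(pile)
        history.append((pile, move))
        pile -= move
    # The last mover won: +1 for the winner's moves, -1 for the loser's,
    # alternating sign as we walk the game backward.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# The table should recover the known strategy: leave a multiple of 4.
print(max((1, 2, 3), key=lambda m: Q[(15, m)]))  # -> 3 (15 - 3 = 12)
```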
"Similarly with chess - eventually we have good mnemonics for what make good plays, and can play blitz reasonably well."
"let's be real - a lot of human symbolic reasoning actually happens outside of the brain"
I was a chess master at age 10. Let's be real - when I play blitz and bullet chess, I am performing multi-level symbolic reasoning at multiple frames per second. In my brain.
I am not an alien. I can do these kinds of symbolic calculations faster than 99.6% of the population mainly because I learned chess as a kid, making it a "native language", and I got good at it early so I spent much of my youth training my neurons with this perceptual task.
My point is not to claim I'm a genius. There are dozens of players who can school me in bullet the way I can school most people.
My point is that human beings DO symbolic reasoning; it is the core of our intelligence: being able to take in different kinds of input, organize some of them into relevant higher-level clusters, sort the clusters by priority, make a plan to deal with the highest-priority clusters, act, rinse and repeat.
Humans simply do not have the computational ability to make decisions based on raw perceptual data in real time. Our brains are designed to act on higher levels of symbolic meaning, and we have perceptual layers to help us turn reality into manageable chunks.
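That take-in/cluster/prioritize/plan/act loop is concrete enough to write down as a skeleton. Every function below is a stub of my own invention, just to make the structure visible:

```python
# The loop described above as a bare skeleton: perceive, chunk raw signals
# into higher-level clusters, prioritize, plan, act, repeat.
def perceive(world):
    return world.get("signals", [])          # raw perceptual input

def chunk(signals):
    # group raw signals into higher-level clusters ("manageable chunks")
    clusters = {}
    for kind, value in signals:
        clusters.setdefault(kind, []).append(value)
    return clusters

def prioritize(clusters):
    # sort clusters by urgency; here, crudely, by total magnitude
    return sorted(clusters.items(), key=lambda kv: -sum(kv[1]))

def plan(top_cluster):
    kind, _ = top_cluster
    return f"handle:{kind}"                  # a stand-in for real planning

def act(action, world):
    world.setdefault("log", []).append(action)

world = {"signals": [("threat", 0.9), ("food", 0.4), ("threat", 0.7)]}
for _ in range(3):                           # rinse and repeat
    ranked = prioritize(chunk(perceive(world)))
    if ranked:
        act(plan(ranked[0]), world)
print(world["log"])                          # ['handle:threat', ...]
```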
As a layman, is this just saying we learn by training an intuition of what's what / what's correct, rather than actually calculating deep reality or referencing our entire memory set every time we take in some information or need to solve a problem? Meaning, we develop tons of rules/heuristics after repeated pattern exposures, and use the simplified rules rather than a deep theory or "brute-forcing" every possibility until we find one that's right. For example with chess, we don't know at the deepest possible level why this move might be best; it just feels right due to a massive learned intuition.
If that's the case, then to me it seems like AGI is limited by the amount and type of data a NN can be fed. To have an intelligence like Homo sapiens, wouldn't you expect that, no matter the underlying NN, it has to take in a comparable amount of data to what the 5+ human senses take in over a lifetime, plus the actual internal "learning" (i.e. pattern recognition, heuristics, and intuition), plus some kind of meta-awareness (consciousness) to speed up and aid this process, plus dedicated pieces of the brain such as Broca's/Wernicke's areas?
AI is a confused soup of more or less (un)related concepts: agency, sentience, pattern recognition, unsupervised learning, embodiment, NLP, and goal selection - among others.
IMO the minimal useful definition of AGI would list a set of testable skills that would qualify as AGI, and a more useful definition would be based on quantifiable skill sets that would allow numerical comparisons between humans and AIs.
It seems pointless to speculate when AGI might be a reality when we have only the fuzziest idea what AGI is supposed to look like.
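To make the "quantifiable skill sets" idea concrete, a comparison could be as simple as a profile of per-skill scores against a human baseline. The skill names and numbers below are invented placeholders:

```python
# One possible shape for a "quantifiable skill sets" definition of AGI:
# score each testable skill relative to a human baseline, then compare.
HUMAN_BASELINE = {"vision": 1.0, "language": 1.0, "planning": 1.0, "motor": 1.0}

def agi_score(agent_profile, baseline=HUMAN_BASELINE):
    """Ratio of agent skill to human baseline, per skill and overall."""
    ratios = {s: agent_profile.get(s, 0.0) / v for s, v in baseline.items()}
    return ratios, min(ratios.values())      # overall = weakest skill

some_agent = {"vision": 1.3, "language": 0.8, "planning": 0.4, "motor": 0.1}
per_skill, overall = agi_score(some_agent)
print(per_skill)   # superhuman vision, subhuman motor control
print(overall)     # 0.1 -- bottlenecked by the weakest skill
```

Taking the minimum as the overall score is one way to encode "general": an agent is only as general as its weakest tested skill.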
I put symbolic reasoning in the spotlight because it is something that NNs are particularly bad at: discrete data, tasks that are hard to design, and measurements that are often approximate and non-differentiable.
The problem is so inherently hard that we are struggling even to come up with a meaningful task that would tell us how badly we are doing. That connects to your first point: I think finding the right loss function is a chicken-and-egg situation here. Once you have the loss function at hand, you already know what task and problem you are going to solve, and then it becomes easier. But that is apparently not our current situation.
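For what it's worth, one standard workaround for a non-differentiable measurement is a score-function (REINFORCE-style) estimator, which needs only reward values, never reward gradients. The toy "symbolic" reward below (count of adjacent bit flips) is my own placeholder:

```python
# Score-function (REINFORCE-style) gradient estimate for a discrete,
# non-differentiable objective: no gradient ever flows through reward().
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(8)                    # logits for 8 independent bits

def reward(bits):
    # discrete "symbolic" check: how many adjacent bits differ?
    return sum(bits[i] != bits[i + 1] for i in range(len(bits) - 1))

baseline = 0.0
for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-theta))   # Bernoulli sampling probabilities
    bits = (rng.random(8) < p).astype(float)
    r = reward(bits)
    theta += 0.1 * (r - baseline) * (bits - p)   # grad of log-prob, scaled
    baseline += 0.05 * (r - baseline)            # running-average baseline

# Probabilities should drift toward one of the two alternating patterns.
print((1.0 / (1.0 + np.exp(-theta))).round(2))
```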
That is why I think DeepMind has a good reason to go after reinforcement learning - after all, that is how we humans are trained, through exams and feedback.
As to your point about LSTMs, I am not eager to make a qualitative claim about whether they can or can't handle short-/long-term memory. That is apparently task-dependent, and all the concepts involved are ill-defined.