
>> And as a bonus what one machine learns one place can be instantly added to the knowledge of the other.

That's actually one big problem with machine learning algorithms: it's not at all clear how to integrate their knowledge with that of other algorithms (and that includes different instances of the same algorithm). Such algorithms build a single model of one domain at a time, and we're talking about very strict domains.

What we're seeing lately is many teams announcing that they trained an algorithm to do this or that pretty damn amazing thing, but watch closely: how many of those announcements describe a system that can integrate its learning into a wider cognitive architecture? There are teams that trained models to recognise images, to combine images, or to map images to strings, but all of these are simple tasks that are only useful in a very limited range of circumstances. Machine learning algorithms unfortunately are one-trick ponies. They do one thing well, and that's it.

>> You can almost sense how imagination and inspiration is inside the reach of machine learning (yes there are some way to go yet).

That's an understatement, the bit about having some way to go. We're not even close, really. To train a machine learning algorithm the first thing you need is a lot of examples of the thing you want it to learn. It's really hard to see how one would compile a set of examples of imagination, not least because it's inside people's heads. Not to mention we don't even know what human imagination is in the first place.
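To make the point about needing examples concrete, here's a minimal sketch (plain Python, the names and toy data are my own invention) of what supervised training means at its simplest: even a trivial one-parameter classifier is nothing but a function fitted to labelled (value, label) pairs, and with no such pairs there is nothing to fit at all.

```python
# Toy 1-D classifier: learn a decision threshold from labelled examples.
def fit_threshold(examples):
    # examples: list of (value, label) pairs with labels 0 or 1
    lows  = [x for x, y in examples if y == 0]
    highs = [x for x, y in examples if y == 1]
    # place the threshold at the midpoint between the two classes
    return (max(lows) + min(highs)) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

# Without these labelled pairs, fit_threshold has nothing to learn from.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (4.0, 1), (4.5, 1), (5.0, 1)]
t = fit_threshold(data)  # midpoint between 2.0 and 4.0, i.e. 3.0
```

The entire "knowledge" of the model is the fitted number `t`; scale the same dependency up to millions of parameters and you get the data appetite of modern deep learning.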



> Machine learning algorithms unfortunately are one-trick ponies.

Most humans only know a few of the skills that other people know. We are specialized, too.


You're talking about specialisation in a restricted field, like maths or a scientific discipline. That's part of education.

I'm talking about how all (healthy) humans learn about the world they inhabit, by building a broad context of the entities and concepts in it. We all learn to speak a language for instance, in fact I believe most people actually learn a couple. We learn to interpret facial expressions, who is our friend and who is not, how to find sustenance and so on. We learn a whole bunch of things outside of formal education and specialised technical knowledge.

We specialise even in the kind of knowledge I describe, sure, but we can also change specialisation without too much hassle. I myself have been making my living as a programmer for the past several years coming from a completely non-technical background for most of my life. It was hard going to learn a new thing from scratch, but I was perfectly able to do so. We lose this flexibility as we grow older but for most of our lives we have nothing like the limitations of machine learning.


> we have nothing like the limitations of machine learning

Well, that works both ways. Humans are quite limited, we haven't doubled our IQ in the last 1000 years. But machines have even more potential to grow than we do, in the next 50-100 years they will surely have matched our ability to adapt.

Today, a company releases a translation software for a couple of languages, next year they release translation between 100 languages. A human can't keep up with that. Yes, they still need tuning and architecture design, but that might change soon, maybe in a few years.

Also, the education of humans needs to be human supervised, but machine learning can be unsupervised (like, AlphaGo playing millions of self games to fine tune its value function), thus, cheaper. I am sure Lee Sedol needed much more energy to train to get up to that level of play. He's 33 years old, and in order to get to his level he required resources, teachers, food, etc. AlphaGo played a few million self play games and only consumed a bunch of cheap electricity, while doing it a hundred times faster and surpassing the man.
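The self-play idea described above can be sketched in a few lines. This is of course nothing like AlphaGo's deep networks and tree search; it's a toy tabular version on a tiny Nim-style game (take 1 or 2 stones, whoever takes the last stone wins), but the principle is the same: the program generates its own training games and nudges a value estimate toward each game's outcome, with no human games or labels involved.

```python
import random

def train(episodes=5000, n=7, alpha=0.1, eps=0.1):
    """Learn V[s]: estimated chance that the player to move wins
    with s stones left, purely from self-play."""
    V = {s: 0.5 for s in range(n + 1)}
    V[0] = 0.0  # pile empty: the player to move has already lost
    for _ in range(episodes):
        s, visited = n, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if random.random() < eps:              # explore occasionally
                m = random.choice(moves)
            else:                                  # else leave opponent worst off
                m = min(moves, key=lambda k: V[s - k])
            visited.append(s)
            s -= m
        outcome = 1.0  # whoever took the last stone won this game
        for state in reversed(visited):            # alternate win/loss backwards
            V[state] += alpha * (outcome - V[state])
            outcome = 1.0 - outcome
    return V

random.seed(0)
V = train()
```

After a few thousand self-play games the table discovers, with no supervision, that multiples of 3 are losing positions for the player to move (V[3] and V[6] end up low, the rest high), which is the known optimal strategy for this game.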


>> AlphaGo played a few million self play games and only consumed a bunch of cheap electricity,

Well, if Google has access to cheap electricity then I understand why they're so successful. Unfortunately, I think they find it as expensive as everyone else, except they have a larger budget than most and they can afford to burn it for as long as they like (well, ish).

>> I am sure Lee Sedol needed much more energy to train to get up to that level of play. He's 33 years old, and in order to get to his level he required resources, teachers, food, etc.

Sure, but in the same vein AlphaGo required the energy and combined effort of probably a few hundred thousand humans to create the infrastructure on which it runs, the factory that created its hardware, the people who invented its programming language, its algorithms and so on. If you're going to think about historical costs, then think about historical costs.

But, I'll refer you to my reply to TomPete (same level): no, running AlphaGo is not "cheap" in any way. There are huge costs involved, as there are for pretty much all state of the art machine learning algorithms, with deep learning topping the curve. For instance, try training an instance of AlexNet on the full ImageNet data on your hardware, with your home electricity budget.

>> next year they release translation between 100 languages.

That's not that hard to do. What's hard to do is to get good translation between those 100 languages. In practice, for all companies who do machine translation right now, translation works well between a few pairs (like three or four pairs) and the rest is only useful as entertainment for native speakers. I speak a few languages, French, English and Greek, and I can attest to the fact that going from or to Greek from either French or English is just hilarious, in any machine translation service I've tried, with Google Translate first, of course.

I think you're just overestimating the quality of machine translation. I'm afraid it's nowhere near as good as you think it to be.

>> Humans are quite limited, we haven't doubled our IQ in the last 1000 years.

That doesn't mean much. Even humans with a low IQ can learn to read and write, and perform all sorts of reasoning tasks that are out of the reach of all AI systems, even if those same systems can outperform every human in specific and very restricted tasks.

Again- you're overestimating AI, I'm afraid.


Why shouldn't "machines" be able to do that? What limit to technology exactly is it you believe there is that this can't be done by machines?

Humans are wrong about a million different things, they make mistakes all the time, we are limited to how much we can take in and so on.

Humans have all sorts of limitations yet somehow we manage to do ok.


>> What limit to technology exactly is it you believe there is that this can't be done by machines?

I used the word "limitation" not "limit" and I'm talking about machine learning algorithms, not machines in general and outside of the context of machine learning.

Machine learning does have limitations compared to humans. Specifically, machine learning algorithms need large, no, vast amounts of data and processing power. You can refer to Hinton, who is on record saying deep learning took off recently thanks to more data and more processing power, which wasn't available earlier.

Humans need nothing like that. We can even learn from no examples at all. If I tell a kid who has never seen a giraffe what one looks like, they'll know one when they see it. I won't even have to show the kid a giraffe and say "that's a giraffe".

That's because we have a context of the world that we can incorporate new knowledge into in the blink of an eye. That is beyond the capabilities of our most advanced algorithms right now- their context is always limited to a very restricted domain, for instance- images of specific things or a sub-language etc.
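For what it's worth, the giraffe example maps onto what the literature calls zero-shot learning: recognising something you've never seen an example of, using only a description. A crude sketch (the attribute sets and names are my own invention) shows the mechanism, and also why it only works if a rich context of attributes already exists to describe things with:

```python
# Known animals, plus one we've only had *described* to us, never seen:
descriptions = {
    "zebra":   {"stripes", "four_legs", "hooves"},
    "lion":    {"mane", "four_legs", "carnivore"},
    "giraffe": {"long_neck", "spots", "four_legs", "very_tall"},
}

def identify(observed):
    # Jaccard similarity: overlap between the sighting and each description
    def score(attrs):
        return len(attrs & observed) / len(attrs | observed)
    return max(descriptions, key=lambda name: score(descriptions[name]))

# First-ever sighting of a tall, spotted, long-necked animal:
sighting = {"long_neck", "spots", "very_tall", "four_legs"}
```

The "zero training examples" part only works because the shared attribute vocabulary (the context) was already in place; that prior context is exactly what current algorithms lack.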

Will machines one day be able to build a context as rich as that of a human? Who knows- maybe. But we're nowhere near achieving that in this generation, or the next, or the one after that.


I think it's misleading to look at this from a linear perspective. Make it exponential and everything changes around it.



