> Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.
Here's the problem with your reasoning. This paragraph is simply wrong, with each sentence being untrue. Education and training are never wasted money, the skills aren't changing that quickly, there isn't any slack in the workforce, and qualified worker shortages are being reported in every trade across the board. Someone needs to solve the problems you hand-wave away.
> this works just fine in most cases - somebody will learn what they need to get paid.
That's me. I specialize in learning new domains. I cost like 8x more than the random junior you'd be able to hire with a functional onboarding program.
> It’s honestly a bit painful watching the AI field struggle to re-learn first principles that other disciplines have already learned.
This is my fear with software development in general. There's a hundred-year-old point of view right next door that would solve my problems, and I'm too incurious to see it.
I have a relative with a focus in math education that I've been stealing ideas from, and I think we'd both appreciate a look at your doc if you don't mind.
I think some of it has to do with incentives. Nobody wants to invest in a team to adapt and test other-field lessons that may come out as "there's no free lunch" or "this is equivalent to a hard problem they didn't solve there yet either."
So instead we're more likely to see navel-gazing "singularity" stories that fit with telling your investors they will become fantastically rich.
They're definitely deeply related. For example, a lot of work gets rejected over "novelty" issues. Well, if success and/or failure depends on something seemingly small, then it will almost never get through review, because it looks like low novelty. Though it will get through review if the authors are convincing enough, which often leads to some minor exaggerations.
Combine that with the publish-or-perish paradigm and I think we got significant coverage. People don't even consider diving deeper into things and are encouraged to take the route of "assume paper is correct" because that's the fastest way to push out research. But if the foundation is shaky, then everything built on it is shaky too.
That's a real distinction from the harder, more formal fields like math and physics. They have no issue pushing out papers that may contain errors, because the process is to attack works as hard as possible; whatever is left standing is where you build next. You definitely get people who take advantage of this, like Avi Loeb publishing about aliens, but it's realistically a small price to pay. And hey, even Loeb's work still contributes. If at some point it actually is aliens, there's existing work to build on. And when it continues to not be aliens, there's still existing work to build on, since really his problem is more that the papers keep concluding "and this is why we can't rule out aliens!" (-__-)
Anyways, long story short, my advice is to just remember that you, and everybody else, are blubbering idiots, and it's an absolute fucking miracle that a bunch of mostly hairless apes can even communicate, let alone postulate about the cosmos. At the end of the day we're all on the same team, seeking truth. Truth matters more than our egos, and if we start to forget how dumb we are, we'll only hinder our pursuit of it.
People have been using tobacco for many thousands of years. If they want to use it knowing full well the consequences, they should be able to. Unless we also ban things like skydiving, rock climbing, and fast cars and motorcycles, it makes no sense to me.
Why isn't prohibiting something known to cause harm a good thing? Plus, smoking doesn't just harm the individual doing it; its harm extends to those in the immediate (and sometimes not so immediate) vicinity, as well as the environment. There is literally zero good to be gained from it.
Exactly, just as there are rational reasons to dislike Bayer/Monsanto's influence on the business of agriculture without being hysterical about "Frankenfood", you can be against OpenAI, etc. without thinking AI is going to destroy civilization or whatever.
How close are you to saying that a repair manual "knows" how to fix your car? I think the conversation here is really around word choice and anthropomorphization.
The problem is, people think word choice influences capabilities: when people redefine "reasoning" or "consciousness" or so on as something only the sacred human soul can do, they're not actually changing what an LLM is capable of doing, and the machine will continue generating "I can't believe it's not Reasoning™" and providing novel insights into mathematics and so forth.
Similarly, the repair manual cannot reason about novel circumstances or apply logic to fill in gaps. LLMs quite obviously can, even if you have to reword that sentence slightly.