No, "the grass always looks greener on the other side" is a perspective thing. When you stand on your own grass you look down onto it and see the dirt, but when you look over to the other side you see the grass from the side, which makes it look denser and hides the dirt. But it's the same boring grass everywhere. :)
At first, I thought "this is missing the point of the phrase" and moved on, but now I'm back to say it's stuck in my head as an intuitive, pretty neat way to think about it.
I think there's some goldilocks speed limit for using these tools relative to your skillset. When you're building, you forget that you're also learning - which is why I actually favour some AI code editors that aren't as powerful because it gets me to stop and think.
I think this is the wrong way to think about it. In this case the "intelligent people who are wasted on finance and ads" are drawn to high-status, low-risk, well-paid jobs, with interesting problems to solve.
If you want to solve meaningful problems you need a different kind of intelligence; you need to be open to risk, have a lot of naivety, not be status-oriented, and have a rare ability to see the forest among the trees (i.e. an interesting problem isn't necessarily an important one).
While true, another point is that “crop harvesting efficiency” and medicine are both more biology/chemistry problems, which may not interest the same people, so it’s unclear they’d even attract the same talent.
It’s also missing that advancements in one field, particularly computer science, computation, and AI, create significant infrastructure that can be applied to those tasks in never-before-seen ways.
And finally, physical problems evolve much more slowly, are more capital intensive, and require a lot more convincing of other people. Digital problems by comparison are more “shut up, I’m right, here’s the code that does X”. They’re easier to validate, with easier feedback resulting in quicker mastery, etc. Not saying it’s completely bulletproof in that way, but it’s more true than in the physical sciences these days. So just throwing more people at the problem may not necessarily yield results without correct funding, which historically was provided by the government (hence the huge boom in the 60s); but as the low-hanging fruit were picked and government became more dysfunctional, this slowed to a crawl.
For example, I personally could probably have ended up working on fusion research if I had had more economic security growing up and the nuclear industry had felt like it was booming instead of constantly underdeveloped (both fission and fusion). Instead I’ve worked with computers because it felt like a boom segment of the economy (and it largely has been while I’ve worked), the problems felt interesting (I’ve worked on embedded OSes, mobile OSes, ML, large distributed systems, databases, and now AI), and it felt like there were always interesting products to build to help improve the world.
Should we view those who chase status as a bad thing, or look to those who assign status that is then chased? If the average person cares more about who won last night's big game than some work done to improve medication, should we really have anything to say about those who decide to optimize their lives by what society actually rewards?
I noticed this way back in grade school. Good grades were, if anything, a net negative for prestige, while sports were a positive. It made me wonder what the school was actually optimizing for, because the day-to-day rewards weren't being given to the studious. (The actual reward function was more complicated; for example, good grades were a boost if one was already a sports star, but these were exceptions to the norm.)
No, that's just an optimization that saved on computing resources. It effectively allows the party that runs this simulation to have a limited world to simulate. Dark matter is the other half of that trick. Both were invented by one Bebele Zropaxhodb after a particularly interesting party in the universe just above this one...
I'm becoming more convinced that this kind of rhetoric is usually peddled by individuals who haven't actually built anything notable (granted, that's most of us).
If all you're doing is using AI to build products, by definition, you're gravitating to the mean.
The AI doesn't care about a delightful product; it cares about satisfying its objective function, and the deeper you go, the more the two will diverge, simply because building a good product is really complex and there are many, many paths in the decision maze.