I second this, though it's a bit unclear to anyone who isn't a domain expert in systems or systems organization.
Defining the problem and identifying constraints is always the hardest part, and it's always different for each application. You also never know what you don't know when a project starts.
The process is inevitably a constant feedback loop of discovery, testing, and discarding irrational or irrelevant results until you get down to first principles or the actual requirements.
Computers, as a general rule, can't do this: the lowest level of a von Neumann architecture can't tell truth from falsity when the inputs are the same (i.e., determinism as a property is broken). Automation breaks in similar ways.
Approximations, which is all the encoded weights are, are just that: approximations, without a thought process. You can build a very convincing simulacrum, but you'll never get a true expert, and the process is not a net benefit overall, since you end up creating cascading problems later that cannot be solved.
Put another way: when there is no economic incentive to become a true expert in the first place, and expertise is only built by working the problems, the knowledge is not passed on, and is then lost as the experts age and die.
Since at best you may only be able to target what amounts to entry-level roles, and those roles are how people become experts, any adoption that replaces those workers guarantees this ultimately destructive outcome, however haphazard the attempt. Even if you can't actually meet that level of production, the mere claim is enough to damage society as a whole. More often than not, it fundamentally breaks the social contract in uncontrollable ways.
The article takes the approach of leveraging domain experts, most likely by copying them in various ways, but if we're being real, that is doomed to failure too, for a number of reasons that are much too long to go into here.
Needless to say, true domain experts, and really any rational person, won't knowingly volunteer anything related to their profession that will be used to economically destroy their future prospects. When they find out after the fact, they stop contributing or volunteering completely, as seen on Reddit. These people are also more likely to sabotage such systems in subtle ways.
This dynamic may also cause the exact opposite, where the truly gifted leave the profession entirely and you get extreme brain drain, as depicted in Atlas Shrugged.
People can and do go on strike, withdrawing the only thing of value they have that cannot be taken. We are already seeing the beginning of this type of fallout in the tech sector. August unemployment for tech was around 7%, while national unemployment was 1.5%; that's roughly 4.7x the national average, and at peak seasonal hiring (Nov-Mar often being a hiring freeze). Tech historically has not been impacted by interest-rate increases; it has been effectively bulletproof against them, so the underlying cause is not interest rates (as some claim). The only recent change big enough to cause a public splash is AI, which is a Pandora's box.
When employers cannot differentiate the gifted from the non-gifted, there is no work for the intelligent, and those people always have more options than others. They'll leave their chosen profession if they can't find work, and they are unlikely to return even if things turn around later.
Intelligent people always ask whether they should be doing something, whereas evil (destructive, blind) people focus on whether they can.
The main difference is a focus on controlling the consequences of your actions so you don't destroy your children's future.