Software security updates seem to be the limit to phone life, not batteries (the latter of which I've had replaced at Apple stores). Apple still seems to have the longest support for security updates.
If this were true, it would mean that essentially no one tried using existing models to scan for vulnerabilities, despite vulnerabilities being extremely lucrative and the focus of a large professional industry. Vulnerability research has been one of the single most talked-about risks of powerful AI, so it wasn't exactly a novel concept, either.
If it is true that existing models can do this, it would imply that LLMs are being under-marketed, not over-marketed, since industry apparently didn't think this was worth trying previously(?). Which I suspect is not the opinion of HN upvoters here.
I use the models to look for vulnerabilities all the time. I find stuff often. Have I tried to build a new harness, or develop more sophisticated techniques? No. I suspect there are some people spending lots of tokens developing more sophisticated strategies, in the same way software engineers are seeking magical one-shot harnesses.
...The absolute last thing I'd want to do is feed AI companies my proprietary codebase. Which is exactly what using these things to scan for vulns requires. You want to hand me the weights, and let me set up the hardware to run and serve the thing in my network boundary with no calling home to you? That'd be one thing. Literally handing you the family jewels? Hell no. Not with the non-existence of professional discretion demonstrated by the tech industry. No way, no how.
To be honest, this just sounds like a ploy to get their hands on more training data through fear. Not buying it, and they clearly ain't interested in selling in good faith either. So DoA from my point-of-view anyways.
The proverbial "50B" is investment in next year's model. The current model cost under "30B", and therefore "is profitable". It is a bet on scaling, yes, but that's been common throughout the industry (see, e.g., Amazon not being profitable for many years while building infrastructure).
> If every year we predict exactly what the demand is going to be, we’ll be profitable every year. Because spending 50% of your compute on research, roughly, plus a gross margin that’s higher than 50% and correct demand prediction leads to profit. That’s the profitable business model that I think is kind of there, but obscured by these building ahead and prediction errors.
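A back-of-the-envelope sketch of the arithmetic in that quote (all numbers illustrative, not from any lab's actual books):

```python
def annual_profit(total_compute_cost: float, gross_margin: float) -> float:
    """Toy model of the quote: half of compute goes to research, half to
    serving customers, and serving compute is the cost of goods sold."""
    research = 0.5 * total_compute_cost
    serving = 0.5 * total_compute_cost
    # A gross margin m means COGS (serving compute) is (1 - m) of revenue.
    revenue = serving / (1.0 - gross_margin)
    return revenue - serving - research

# With a margin above 50%, revenue exceeds total compute spend, so the
# research half is covered and the year is profitable; at exactly 50%
# it breaks even, which is why both conditions appear in the quote.
```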
You're missing the forest for the trees. Per-token pricing is irrelevant when you're just trying to get shit done. I pay 20 bucks a month for OpenAI, but I likely use $200+ a month of tokens just on coding (and that's just the raw tokens, ignoring all the harnessing on their end). Even OpenAI has said that they're losing money on the 200-dollar subscriptions[1]. This is not a viable business model. Why do you think they are introducing ads this year[2]?
Maybe he's comparing the renting price of a bare metal server on its own, and doesn't realise how drastically cheaper they are to batch together for an API provider.
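A rough illustration of why batching changes the math so much (the rental price and throughput figures below are made up for the sketch; real numbers vary by GPU and model):

```python
GPU_COST_PER_HOUR = 4.0        # assumed bare-metal rental price
TOKENS_PER_SEC_SOLO = 60       # assumed: one user streaming alone
TOKENS_PER_SEC_BATCH64 = 2400  # assumed: aggregate throughput at batch size 64

def cost_per_million_tokens(tokens_per_sec: float) -> float:
    # Same GPU-hour spread over however many tokens it produces.
    tokens_per_hour = tokens_per_sec * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

solo = cost_per_million_tokens(TOKENS_PER_SEC_SOLO)        # ~$18.5 / Mtok
batched = cost_per_million_tokens(TOKENS_PER_SEC_BATCH64)  # ~$0.46 / Mtok
```

Under these made-up numbers the API provider's per-token cost is 40x lower than the lone bare-metal renter's, since LLM decoding is memory-bandwidth-bound and serving many requests at once reuses the same weight reads.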
Btw, it doesn't need to be actively coordinated for this to happen.
Building architectural styles used to vary city by city; now buildings look roughly the same worldwide. Style depends on the year built, not the location.
Because every architect is "reading the same magazine" worldwide now that the internet exists, rather than debating in their own city.
Similar monoculture of global thought is happening in all fields.
> Similar monoculture of global thought is happening in all fields.
Thereby removing yet more interesting things to see in the world through the spread of hyper-optimized inoffensive blandness. In the same way that restaurants are slowly turning into the same set of grey boxes with little of note distinguishing each.
Not Windows: Operating systems. We did get more capable operating systems. The point of the quote is "this is the worst the SOTA will ever be".
If Windows XP were fully supported today I still wouldn't use it, personally, despite having respect for it in its era. The core technology of newer OSes (how, e.g., sandboxing, security, memory management, and driver stacks are implemented) has vastly improved.
Of course not. But I believe your Windows example was implying fundamental tech got worse.
The original "worst" quote is implying SOTA either stays the same (we keep using the same model) or gets better.
People have been predicting that progress will halt for many years now, just as they did for many years with Moore's law. By all indications AI labs are not running short of ideas yet (even judging purely by externally visible papers being published and the model releases this week).
We're not even throwing everything current hardware technology makes possible at the problem (see the recent demonstration chips fabbed specifically for LLMs, rather than general-purpose compute, doing 14k tokens/s). It's true that we may hit a fundamental limit with current architectures, but there's no indication that we're at one yet.
Have you hit that? I thought it only kicked in in extreme cases where Claude felt uncomfortable, like under awful, heavy psychological coercion. They wanted Claude not to be forced to reply endlessly.