(I am not an expert on anything.) One happy circumstance here is that while the RAM cartel is chasing Big AI's money today, in the medium term its self-interest probably makes it a supporter of local AI. A new, compelling reason to have 128GiB, 256GiB or more of VRAM on all your devices? You can be sure that the dollar signs are glowing in their eyes already. The less efficient use of VRAM by personal devices (any given device's VRAM will sit mostly idle much of the time) tends to make local AI even more attractive to them, all else being equal (though of course it isn't), compared to the centralised systems run by engineers and accountants striving all day to maximise ROI; and in any case, since the short-run supply constraints on RAM go away in the longer term, the RAM manufacturers will be able to supply both. My guess is that you can probably also explain Apple's AI strategy (sit tight and wait for Moore's Law to make local AI more viable) and maybe even nVidia's (lay the groundwork for a gradual switch from selling shovels to the army to selling shovels at Home Depot over time, at least as a Plan B) in similar terms.
Just because we'll have to pay for the hardware doesn't mean we'll have meaningful control. Look at what happened with phones - weak and limited slaves to the mothership, secured against pesky users with powerful encryption, yet costing more than a vastly superior laptop; quasi-mandatory platforms for highly addictive experiences, centered around the flow of information.
And now with LLMs we can create even more fabulously addictive experiences, even more finely tuned information flows, even more treacherous servants. I very much doubt that we'll be allowed full control of it all. Every effort will be spent to centralize power, and every effort will be spent to extract as much cash as possible from us for the privilege.
Phones are such a travesty because they're so incredibly overpowered. I think there are a lot of people out there whose iPhone has more compute than their laptop or desktop, but it can't do 1/10th the amount of stuff. What a waste!
They're actually underpowered because they can't sustain that full compute over time like a desktop can, or even a laptop to some extent. That's a key limitation for AI.
That policy itself seems wrong though. It seems to imply that anything someone claims about themselves on a personal blog doesn't need verification.
People often have a vested interest in lying about themselves, and often people may not remember historical details about their lives as accurately as they believe they do.
That said, "no really, I am still alive" seems like something that should be trusted as a source.
But remember: once again, don't simply get angry at Google the institution. Get angry at Page and Brin personally. They have the power to prevent this, a power they were careful to preserve when they gave Google its IPO. They are fully responsible for Google's choices here. But, partly because they aren't constantly jumping up and down drawing attention to themselves on social media, they've tended to escape the same personal scrutiny given to eg. Elon Musk. That needs to end.
As it so happens 'cara' https://en.wiktionary.org/wiki/cara#Irish is the Irish [Gaelic] word for 'friend' (probably related to words like Italian 'caro' etc.) so CARA is not a bad name for a robot dog!
All else being equal, the return of high-touch recruiting work is of course a reduction in industrial productivity and a negative contribution to economic growth. But it does generate more jobs! Put that in your predictions of AI’s economic impact and smoke it …
> All else being equal, the return of high-touch recruiting work is of course a reduction in industrial productivity and a negative contribution to economic growth. But it does generate more jobs!
When the problem definition is "companies want applicants who are known to be humans with a minimally vetted work history", Occam's Razor[0] implies people can meet that need efficiently, if for no other reason than that it's trivial for one person to converse with another.
How would the above result in:
> ... of course a reduction in industrial productivity and a negative contribution to economic growth.
If someone who contributed to the Linux kernel has a resume that reads the same as that of a spammer who lied about contributing, how do we know which one is telling the truth unless there is a verification system?
I think you have misunderstood what I was saying. I didn't compare total productivity with high-touch manual work from a recruiting agency in the actually-existing 2026 to total productivity without that same manual work in the actually-existing 2026. I agree (or certainly find it plausible) that in our actually-existing 2026 total productivity is likely higher with the extra manual work from recruiters than without it. What I compared was total productivity with high-touch manual work from a recruiting agency in the actually-existing 2026 to total productivity, without that same extra manual work, in a hypothetical 2026 where modern (GPT-4-or-better) LLMs don't exist (or at least don't exist yet). That's the relevant comparison when it comes to asking the question "what impact have LLMs had on productivity?" The actually-having-existed 2019 or 2022 are probably a decent proxy for the hypothetical 2026 here.
Put simply, organizations needing skilled personnel can delegate post-application screening to their set of approved staffing firms. Said organizations can employ initial screening techniques to filter out obvious fraud, such as requiring online applicants prove they possess the email/cell phone provided via industry standard mechanisms (while employing deny-lists as applicable). Given the remuneration commitment each hire represents year-over-year and the ability to hold staffing firms accountable, their fees are typically quite reasonable.
Why you introduced the question "what impact have LLMs had on productivity?" in this context escapes me.
* https://www.adobe.com/jp/print/postscript/pdfs/PLRM.pdf The PostScript Language Reference (third ed.—a later edition of the "Red Book") (principally) by Ed Taft, Steve Chernicoff and Caroline Rose, 1999 (ISBN 0-201-27922-8)
(At first my retro-ps tab got itself into a state in which it would not run any code entered into the Code textarea, instead timing out and returning an error; and since page reload is soft-disabled you'll have to either force a reload or open a new tab. Also, since the Adobe sample code uses indentation extensively—for example the Blue Book's official "hello world" program is
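(quoting from memory here, and without the book's original indentation and formatting, which may differ):

    %!PS
    /Helvetica findfont   % look up the Helvetica font (from memory; the book may use a different font/size)
    20 scalefont setfont  % scale it to 20 points and make it the current font
    72 720 moveto         % move the current point about an inch in from the left, near the top of the page
    (Hello, world!) show  % paint the string at the current point
    showpage              % emit the finished page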
> I'd say all of those people have significantly different styles so I think Opus is relying heavily on topic and skewing towards very prolific writers in its guesses
In other words, that there's a bit of Akinator to how Claude is doing so well at identifying famous or somewhat-famous online writers. And of course it's not surprising that a machine-learning system will take every opportunity left open to it to "cheat". OTOH there are things like the "Large-scale online deanonymization with LLMs" paper https://arxiv.org/abs/2602.16800 which seem to show that current LLMs really can deanonymise many or most ordinary posters based on prose style, though I'm not able to evaluate those claims myself. Do we know whether the LLM providers have actively tried to steer their (easily-accessible) systems away from being able or being willing to do mass deanonymisation?
The Apache Foundation used to step in in this kind of situation, didn't it? Though maybe pgbackrest isn't quite big and official enough to be the kind of software which Apache takes on, and one certainly hears (increasing?) grumbles about Apache's stewardship.
Thank you. So it's more or less running (certain specified) function calls in parallel? Sounds nice, but what happens to upward funargs? I assume first-class continuations are right out ...