"While engaged in a survey of China, the baron was charged with dreaming up a route for a railway linking Berlin to Beijing. This he named die Seidenstrassen, the Silk Roads. It was not until 1938 that the term Silk Road appeared in English, as the title of a popular book by a Nazi-sympathising Swedish explorer, Sven Hedin."
I agree that the article is a poor take on AI in programming. However, I wouldn't blame NYT for corrupt journalism. This is an op-ed, not something written by NYT staff.
> it could also be that these software jobs won’t pay as well as in the past, because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.
This sounds like the opposite of what the article said earlier: newbies aren't able to get as much use out of these coding agents as the more experienced programmers do.
NYT has it out for digital advertisers, who directly compete with them. I do sense some schadenfreude here that the tech nerds who work at these places might be in trouble.
"Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code."
Unfortunately for copywriters at the NYT, LLMs are far better at stringing together natural language prose than at producing large amounts of valid software. Get ready to supervise LLMs all day if you're not already.
The code is also recognizable as slop to those who know what to look for. Not just the tropey "Not X, but Y" kind that's super easy to spot, but tons of repetition, deeply nested code, etc.
A counterpoint is that (maybe) nobody cares if the code is understandable, clean and maintainable. But NYT is explicitly in the business of selling ads surrounded by cheap copy just good enough to attract eyeballs. I suspect getting LLMs to write that is going to be far easier than getting LLMs to maintain large code bases autonomously.
If you explicitly make it go over the code file by file to clean up, fix duplication and refactor, it'll look much better, while no amount of "fix this slop" prompting can fix AI prose.
> no amount of "fix this slop" prompting can fix AI prose
What's the evidence for that? What fundamental limitation of these large language models makes them unable to produce natural-sounding language? A lot of people see the high likelihood of ever-increasing amounts of generated, no-effort content on the web as a real threat. You're effectively saying that's impossible.
>What fundamental limitation of these large language models makes them unable to produce natural language?
LLMs can get arbitrarily good at coding problems by training in a reinforcement-learning loop on randomly generated coding problems, with a compiler and unit tests to verify correctness. On the other hand, there's no way to automatically generate a "human thinks this looks like slop" signal; it fundamentally requires human time, which severely limits throughput compared to fully automatable training signals.
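A rough sketch of the asymmetry being described: a coding reward can be computed automatically by running the candidate against unit tests, while a "does this read as slop?" reward has no automatable equivalent. All function names here are illustrative, not from any real training pipeline.

```python
import os
import subprocess
import sys
import tempfile

def coding_reward(candidate_code: str, test_code: str) -> float:
    """Automatable reward: run the candidate against unit tests.

    Returns 1.0 if all tests pass, 0.0 otherwise -- no human in the loop,
    so this can be evaluated millions of times in an RL training loop.
    """
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "prog.py")
        with open(path, "w") as f:
            f.write(candidate_code + "\n" + test_code + "\n")
        result = subprocess.run([sys.executable, path], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0

def prose_reward(candidate_text: str) -> float:
    # There is no compiler for prose: a "this looks like slop" signal
    # requires a human rater, so throughput is bounded by human time
    # rather than CPU time.
    raise NotImplementedError("needs human judgment")

# The reward for working code is fully automatic:
good = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(coding_reward(good, tests))  # 1.0
```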
Not the person you're responding to, but I think they mean sexps as in S-expressions [1]. These are used in all kinds of programming, and they have been used inside protocols for markup, as in the email protocol IMAP.
Yes. Not quite a decade before JSON and YAML, what was at hand for a human-readable interchange format for nested data? SGML (no XML yet), something FORTH-ish, rolling your own, and...? The contemporary WAIS (search as a distinct, non-HTTP protocol) shrugged off human readability entirely and tried nightmarish binary ASN.1.
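For anyone unfamiliar, here is a minimal sketch of why S-expressions work as a nested-data interchange format: parentheses give you arbitrary nesting with an almost trivial parser. This toy parser handles no quoting or escapes, and the IMAP-flavoured example is illustrative, not a literal protocol exchange.

```python
def parse_sexp(s: str):
    """Parse a simple S-expression into nested Python lists.

    Atoms stay as strings; no string quoting or escape handling --
    just enough to show the shape of the format.
    """
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                out.append(item)
            return out, pos + 1  # skip the closing ")"
        return tokens[pos], pos + 1

    result, _ = read(0)
    return result

# Nested data, IMAP-flavoured: roughly how a server describes a message.
print(parse_sexp("(FLAGS (\\Seen \\Answered) UID 4827)"))
# -> ['FLAGS', ['\\Seen', '\\Answered'], 'UID', '4827']
```

The whole parser fits in a dozen lines, which is much of the appeal next to SGML or binary ASN.1.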
"While engaged in a survey of China, the baron was charged with dreaming up a route for a railway linking Berlin to Beijing. This he named die Seidenstrassen, the Silk Roads. It was not until 1938 that the term Silk Road appeared in English, as the title of a popular book by a Nazi-sympathising Swedish explorer, Sven Hedin."
-- historian William Dalrymple
https://www.theguardian.com/artanddesign/2024/oct/06/the-sil...