Yeah, that's FUD. Cloudflare hasn't called anybody demanding huge sums of cash and holding your domains hostage. As a registrar they're fine and don't play scammy upsell games (because they have a real business model that isn't just registration skim).
If you're not capable of setting up DMARC correctly, then it's a safe assumption you aren't capable of adequately securing your email server, which is even easier to mess up and carries much higher consequences. Even if you don't intend to be a spammer, if your server gets pwned you will become an unwitting one.
I set up my org's SPF/DKIM/DMARC (we self host; they have feelings about corporate data sovereignty...). It took about 30 min having never touched them before, and maybe another 15 to write an ansible playbook to rotate the keys.
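For anyone curious what "setting them up" amounts to, it's essentially three DNS TXT records. A minimal sketch (the domain, the "mail" selector, the report address, and the truncated key are all placeholders, not details from this thread):

```
; SPF: only hosts listed in the domain's MX records may send mail for it
example.com.                  TXT  "v=spf1 mx -all"

; DKIM: public key for signature verification, published under a selector
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: reject messages failing SPF/DKIM alignment, send aggregate reports
_dmarc.example.com.           TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Most people start with `p=none` to collect reports before tightening to `quarantine` or `reject`.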
We do have a _tremendous_ amount of spam that fails these checks, as well as a few legitimate organizations... Some of our peer companies have sent out notices that they will bounce anything that fails these checks in the coming years, and we'll probably do the same before too long.
> I pointed the AI at the tedious parts, the stuff that makes a 14k-line PR possible but no human wants to hand-write: implementing every fs method variant (sync, callback, promises), wiring up test coverage, and generating docs.
Is it slop if it is carefully calculated? I tire of hearing people use slop to mean anything AI, even when it is carefully reviewed.
Considering the many hundreds of technical comments over at the PR (https://github.com/nodejs/node/pull/61478), the 8 reviewers thanked by name in the article, and the stellar reputations of those involved, seems likely.
My mistake, 19k lines. At 2 mins per line that's (19000*2)/60/7 = 90 7-hour days to review it all. Are you sure it was all read? I mean, they couldn't be bothered to write it, so what are the chances they read it all?
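Spelling out that back-of-the-envelope figure (both the 2 min/line rate and the 7-hour review day are the guesses above, not measurements):

```python
# Rough review-effort estimate; the per-line rate and hours/day
# are assumptions from the comment above, not measured values.
lines = 19_000
minutes_per_line = 2
review_hours_per_day = 7

total_hours = lines * minutes_per_line / 60
working_days = total_hours / review_hours_per_day
print(round(total_hours), round(working_days))  # ~633 hours, ~90 days
```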
For someone’s website or one business maybe the risk is worth it, for a widely used software project that many others build on it is horrifying to see that much plausible code generated by an LLM.
I probably review about 1k LoC worth of PRs / day from my coworkers. It certainly doesn't take me 33 hours (!!) to do so, so I must be one of those rockstar 10x superhero ninja engineers I keep hearing about.
I think that goes back to whether they are programmers vs engineers.
Engineers will focus on professionalism of the end product, even if they used AI to generate most of the product.
And I'm not going by "title", but by mindset. Most of my fellow engineers are not engineers in that sense - they are just programmers - as in, they don't care about the non-coding part of the job at all.
Depends - if it is from a human I find I can trust it a lot more. If it is large blobs from LLMs I find it takes more effort. But it was just a guess at an average to give an estimate of the effort required. I’d hope they spent more than 2 mins on some more complex bits.
Are you genuinely confident in a framework project that lands 19kloc generated PRs in one go? I’d worry about hidden security footguns if nothing else and a lot of people use this for their apps. Thankfully I don't use it, but if I did I'd find this really troubling.
It also has security implications - if this is normalised in node.js it would be very easy to slip deniable exploits into large PRs. It is IMO almost impossible to properly review a PR that big for security and correctness.
usually yes, but that's why there are tests, and there's a long road before people start depending on this code (if ever). people will try it, test it, report bugs, etc.
and it's not like super carefully written code is magically perfect. we know that djb can release things that are close to that, but almost nobody is like him at all!
I carefully review far more than 14k LoC a week… I’m sure many here do. Certainly the language you write in will greatly bloat those numbers though, and Node in particular can be fairly boilerplate heavy.
And a less incompetent government interested in protecting the environment, citizens' rights, and finite resources would have outlawed artificially locked computing machinery for the same reasons as single-use lithium e-cigarettes.
Somebody had to die of cancer at the fab to give you that CPU, only for the manufacturer to brick it with an eFuse N years after sale. All to protect an unsustainable business model: underpricing the hardware and rent-seeking on zero-cost distribution.
Oh and in both cases, whose rights does the DRM protect?
We'd just be overloading "lessons" as well, and even more so because it takes more work to ground the concept, given its larger semantic distance from what we're describing.