Mostly yes — they were written by Claude/Gemini/ChatGPT, but they've already caught several regressions introduced by the models during later refactors; there’s a fascinating loop in letting one iteration of an LLM audit the next via a static test suite.
I have never had it automate any tedium for me. Because if I didn't write it, I need to scrutinize all of it, but without the benefit of a prebuilt mental model. Reading through tons of similar looking code (because it's supposed to automate boilerplate) looking for subtle mistakes is mind numbingly boring. It's like looking at a wall full of periods in font size 8, and trying to find the one comma.
what i really hate is the "remind me later" buttons. I want to say a plain "no" but the app won't let me. It promises not to respect my decision right there in the popup itself!
GPUs in your average home PC have a longer lifespan. Datacenters run theirs at full load for very long periods of time. Some datacenters literally burn through hundreds of GPUs a day.
What will happen is that new buzzwords will be invented, and a new fad will take its place. And we will be stuck with the short end of the stick again. You can hope, but shit doesn't really get cheaper for us common folk, ever. :/
most of today's problems in this field exist because upper management got swindled into thinking that the process doesn't matter, as long as something comes out the other end. It doesn't even need to work properly.
But this shitty state of software nowadays is mostly due to only caring about the result and not the process.
To be clear: this existed even before AI, and also led to the proliferation of electron and its ilk.