Honestly, I think it's great that you could get the thing you wanted done.
Consider this, though: Your anecdote has nothing to do with software engineering (or an engineering mindset). No measurements were done, no technical aspects were taken into consideration (you readily admit that you lack the knowledge to do that), you're not expecting to maintain it or seemingly to further develop it much.
The above situation has never actually been hard; the thing you made is trivial to someone who knows the basics of a small set of things. LLMs (not Claude Code) have made this doable for someone who knows none of the things and that's very cool.
But all of this really doesn't mean anything for solutions to more complex problems where more knowledge is required, or solutions that don't even really exist yet, or something that people pay for, or things that are expected to be worked on continuously over time, perhaps by multiple people.
When people decry vibecoding as being moronic, the subtext really is (or should be) that they're not really talking to you; they're talking to people who are delivering things that people are expected to pay for, or rely on as part of their workflow, and people who otherwise act like their output/product is good when it's clearly a mess in terms of UX.
I get what you're saying, but imagine a CTO/CIO who's never been very technical. The world is full of them. They vibe up an app, and think it's easy. They don't have the developer experience to know the things they're missing.
While I downplayed my job experience, I'm very in touch with developers, their workflows, and the challenges they face. And I'm scared because they won't be the ones making these decisions about LLM usage; their bosses will, the guy who vibe-coded a dumb app over the weekend.
Maybe I misunderstood the purpose of your post...? It seemed to me like you were arguing "Hey, what about me? Why shouldn't I vibecode since it enables me to do things that I couldn't before?" and that's what I wrote my comment addressing.
I completely agree that people are going to be forced into using things that basically do not really work for anything non-trivial without massive handholding, and they will be forced to use those things by people who are out of touch and are mostly setting up to eventually get rid of as many people as they possibly can.
I (like many on HN I'm sure) have been continually pestered by management to use AI like it's some cure for polio. They just want to tell their VP that "my team is accelerating its use of AI!" so the VP can pass that up the food chain. Same as when we started migrating (unnecessarily imho) to the cloud. Just another checkbox and an attaboy from senior management.
There's really not much of a place for AI in my work. We're not cutting edge, we're just a large, safe business protected by a regulatory moat. We don't want to be on the cutting edge, since the bleeding is bad for profits and reputation. But the incentives our IT execs operate under are all about resume/credential building and moving on to bigger things. Our C level officers are not even slightly technical, so they defer to the CIO. Nothing new at all in this company, it's a story told a thousand times.
So I was just very curious how it would be to approach vibe coding as if I were my VP. You don't know what you don't know, right? And the ease of creating a simple app that would be beyond 99% of the people in my company gives way too much confidence. And with misplaced confidence comes poor decision-making.
I can see where someone who currently is an Excel jockey would benefit from some of this stuff. As long as they can compare and test the outputs. But the danger from false confidence has to be an institutional risk that's being ignored.
Do you not think that ~400k lines of code for something as trivial as Claude Code is a great indication that there is an immense amount of bloat and stacking of overwrought, poor "choices" by LLMs in there? Do you not encounter this when using LLMs for programming yourself?
I routinely write my own solutions in parallel to LLM-implemented features from varying degrees of thorough specs and the bloat has never been less than 2x my solution, and I have yet to find any bloat in there that would cover more ground in terms of reliability, robustness, and so on. The biggest bloat factor I've found so far was 6x of my implementation.
I don't know, it's hard to read your post and not feel like you're being a bit obtuse. You've been doing this enough to understand just how bad code gets when you vibecode, or even how much nonsense tends to get tacked onto a PR if someone generates from spec. Surely you can do better than an LLM when you write code yourself? If you can, I'm not sure why your question even needs to be asked.
> Do you not think that ~400k lines of code for something as trivial as Claude Code is a great indication that there is an immense amount of bloat and stacking of overwrought, poor "choices" by LLMs in there?
I certainly wouldn't call Claude Code "trivial" - it's by far the most sophisticated TUI app I've ever interacted with. I can drag images onto it, it runs multiple sub-agents all updating their status rows at the same time, and even before the source code leaked I knew there was a ton of sophistication in terms of prompting under the hood because I'd intercepted the network traffic to see what it was doing.
If it was a million+ lines of code I'd be a little suspicious, but a few hundred thousand lines feels credible to me.
> Surely you can do better than an LLM when you write code yourself?
It takes me a solid day to write 100 lines of well designed, well tested code - and I'm pretty fast. Working with an LLM (and telling it what I want it to do) I can get that exact same level of quality in more like 30 minutes.
And because it's so much faster, the code I produce is better - because if I spot a small but tedious improvement I apply that improvement. Normally I would weigh that up against my other priorities and often choose not to do it.
So no, I can't do better than an LLM when I'm writing code by hand.
That said: I expect there are all sorts of crufty corners of Claude Code given the rate at which they've been shipping features and the intense competition in their space. I expect they've optimized for speed-of-shipping over quality-of-code, especially given their confidence that they can pay down technical debt fast in the future.
The fact that it works so well (I get occasional glitches but mostly I use it non-stop every day and it all works fine) tells me that the product is good quality, whether or not the lines of code underneath it are pristine.
> I certainly wouldn't call Claude Code "trivial" - it's by far the most sophisticated TUI app I've ever interacted with.
I'll be honest, I think we just come to this from very different perspectives in that case. Agents are trivial, and I haven't seen anything in Claude Code that indicated to me that it was solving any hard problems, and certainly not solving problems in a particularly good way.
I create custom 3D engines from scratch for work, and I honestly think those are pretty simple and straightforward; they're certainly not complicated, and a lot simpler than people make them out to be. But if Claude Code is "not trivial", and even "sophisticated", I don't know what to classify 3D engines as.
This is not some "Everything that's not what I do is probably super simple" rant, by the way. I've worked with distributed systems, web backend & frontend and more, and there are many non-trivial things in those sub-industries. I'm also aware of this bias towards thinking what other people do is trivial. The Claude Code TUI (and what it does as an agent) is not such a thing.
> So no, I can't do better than an LLM when I'm writing code by hand.
Again, I just think we come at this from very different positions in software development.
> I think the same is currently happening with coding, except it will allow single builders and designers to do the same thing as an entire team 5 years ago.
This part of your post I think signals that you are either very new or haven't been paying attention; single developers were outperforming entire teams on the regular long before LLMs were a thing in software development, and they still are. This isn't because they're geniuses, but rather because you don't get any meaningful speedup out of adding team members.
I've always personally thought there is a sweet spot at about 3 programmers where you still might see development velocity increase, but that's probably wrong and I just prefer it to not feel too lonely.
In any case teams are not there to speed anything up, and anyone who thinks they are is a moron. Many, many people in management are morons.
You’re absolutely right! HN isn’t just an LLM-infested hellscape, it’s a completely new paradigm of machine-assisted, chocolate-infused information generation.
I agree entirely with your statement that structure makes things easier for both LLMs and humans, but I'd gently push back on the point about mutation. Just as mutation is fine for humans, it also seems to be fine for LLMs, in that structured mutation (we know what we can change, where we can change it, and to what) works just fine.
Your dataframe example is the completely unstructured mutation typical of a dynamic language and its sensibilities.
I know from experience that none of the modern models (even cheap ones) have issues dealing with global or near-global state and mutating it, even when navigating mutexes, condition variables, and so on.
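To make the distinction concrete, here's a minimal sketch of what I mean by structured vs. unstructured mutation. The names (`Counter`, `sloppy_bump`) are my own illustration, not anything from the dataframe example:

```python
import threading

# Structured mutation: the state lives in one place and can only change
# through a narrow, lock-guarded API. Both a human and an LLM can see
# exactly what may change, where, and to what.
class Counter:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = 0

    def increment(self, amount: int = 1) -> int:
        with self._lock:
            self._value += amount
            return self._value

    @property
    def value(self) -> int:
        with self._lock:
            return self._value

# Unstructured mutation, by contrast, pokes at shared state from anywhere,
# with no guard and no contract about who changes what:
shared = {"value": 0}

def sloppy_bump(d: dict) -> None:
    d["value"] += 1  # any caller, anywhere, can do this

counter = Counter()
counter.increment(3)
sloppy_bump(shared)
print(counter.value, shared["value"])  # 3 1
```

The first style is what I mean when I say mutation is fine for LLMs; the second is the dataframe-style free-for-all.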
> he'll likely go more and more towards vibecoding again
I think "more and more" is doing some very heavy lifting here. On the surface it reads like "a lot" to many people, I think, which is why this is hard to read without cringing a bit. Read like that it comes off as "It's very addictive and eventually you get lulled into accepting nonsense again, except I haven't realized that's what's happening".
But the truth is that this comment really relies entirely on what "more and more" means here.
I agree on that part as well, but saying that AI will go back to what it was before ChatGPT came along is false. LLMs will still be a standalone product and will be taken for granted. People will (maybe? hopefully?) eventually learn to use them properly and not generate tons of slop for the sake of using AI. Many "AI companies" will disappear from the face of the Earth. But our reality has changed.
LLMs will not be just a standalone product. The models will continue to get embedded deep into software stacks, as they already are today. For example, if you're using a relatively modern smartphone, you have a bunch of transformer models powering local inference for things like image recognition and classification, segmentation, autocomplete, typing suggestions, search suggestions, etc. If you're using Firefox and opted into it, you have local models used to e.g. summarize the contents of a page when you long-click on a link. Etc.
LLMs are "little people on a chip", a new kind of component, capable of general problem-solving. They can be tuned and trimmed to specialize in specific classes of problems, at a great reduction in size and compute requirements. The big models will be around as part of the user interface, but small models are going to show up increasingly everywhere in computational paths, as we test out and try new use cases. There's so much low-hanging fruit to pick that we're still going to see massive transformations in our computing experience, even if new model R&D stalled today.
That's weird, because every time I see someone even talking positively about Claude Code, they always seem to mention they're hitting their 5-hour limits in 2-3 hours, they're hitting their overall limits all the time, and so on.
Meanwhile I can't even seem to spend my $20 Cursor Composer 2 tokens using their agent. I've been doing useless shit just to see how much usage I can cram in there and it'd probably take 10 hours of vibecoding like a loser every day to hit the limits at this point.
With that said, I'm not going to pay for something that doesn't let me use whatever I want to use (in terms of harness, etc.), so both Anthropic (who were already disqualified because of their ridiculous limits) and Cursor are out (AFAIK you can't use an agent other than their `agent` binary without some ridiculous hack like proxying all of the calls through `agent`).
I can't imagine all of the providers will keep pretending their agents are real added value going forward, but even if they do, there's still stuff like OpenRouter, which doesn't give a shit; may as well use something like that.
> Sacrifice civilian infrastructure as the only viable bombing target.
I'm imagining the air crew going "Huh, there are no clear actual targets to bomb. Hey, Cleetus, command won't be happy about us not bombing anything at all, retarget on that school over there, let's get this over with and go home."