Hacker News

>I write detailed specs. Multi-file, with example code, in Markdown. Then I hand them over to Claude Sonnet. Even with hard requirements listed, I found that the generated code missed requirements, had duplicate code, or even unnecessary data-wrangling code (mapping objects into new objects of narrower types when that wasn't needed), along with tests that fake results and work around failures just to pass.

Stop doing that. Micromanage it instead. Don't give it the specs for the whole system; design the system yourself (you can use it for help doing that), inform it of the general design, but then give it tasks, ONE BY ONE, to flesh it out. Approve each one, ask for corrections if needed, then go to the next.

Still faster than writing each of those parts yourself (a few minutes instead of multiple hours), but much more accurate.
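The thread doesn't include any code, but the loop described above can be sketched roughly. Everything here is hypothetical: `llm` stands in for whatever model call you use, and `approve` is the human review gate.

```python
def build_system(design: str, tasks: list[str], llm, approve) -> list[str]:
    """Drive the model one task at a time instead of handing over the whole spec.

    `llm(prompt)` and `approve(code)` are placeholders: the first is whatever
    model call you use, the second is the human review step.
    """
    context = f"Overall design (written by you, not the model):\n{design}\n"
    done = []
    for task in tasks:
        code = llm(f"{context}\nNext task (do ONLY this): {task}")
        # Approve each piece, or ask for corrections, before moving on.
        while not approve(code):
            code = llm(f"{context}\nRedo this task with corrections: {task}")
        done.append(code)
        context += f"Completed: {task}\n"
    return done

# Toy demo with stubs: the "model" echoes the task, the "reviewer" approves everything.
parts = build_system(
    design="two modules: parser, reporter",
    tasks=["write the parser", "write the reporter"],
    llm=lambda prompt: f"<code for: {prompt.splitlines()[-1]}>",
    approve=lambda code: True,
)
assert len(parts) == 2
```

The point of the structure is that the human stays in the loop at every step, rather than only at the end.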




Might as well just write the code yourself at that point. And as a bonus, end up with a much better understanding of the codebase (and way better code)

>Might as well just write the code yourself at that point

"We have this thing that can speed your code writing 10x"

"If it isn't 1000x and it doesn't give me a turnkey end to end product might as well write the whole thing myself"

People have forgotten balance. Which is funny, because the inability of the AI to do the whole thing end to end correctly is the only thing standing between 10 developers having jobs and 1 developer telling 10 or 20 agents what to do end to end and collecting the full results in a few hours.

And if you do it the way I describe you get to both use AI, AND have "a much better understanding of the codebase (and way better code)".


Writing the code is usually not the bottleneck, so you don't gain that much by speeding it up. And as I said, you lose a lot of knowledge about the code when you don't write it yourself.

Unless coding is most of your job, which is rare, you're giving up really knowing what your software does in order to achieve a very minor speed-up. Just to end up having to spend way more time later trying to understand the AI-generated code when inevitably something breaks.

> And if you do it the way I describe you get to both use AI, AND have "a much better understanding of the codebase (and way better code)".

Using AI is not a goal in itself, so I don’t care about “getting to use AI”. I care about doing my job as efficiently as possible, considering all parts of my job, not just coding.


>Writing the code is usually not the bottleneck

I hear this repeated often and it's false.

Writing the code is A bottleneck. Unless by "writing the code" people just mean the mere physical act of typing it in, which is not what I mean.

But if someone thinks that only the design/architecture decisions take time, and that fleshing them out in actual code does not, they're wrong.

Some coders seem to think they're high-end architects, and that fleshing out the design is a triviality that goes very fast. Watching high-end coders write, e.g. in coding-session streams or just someone at your office, will show you it's never that fast.

In actual programming practice, even if you know the design end to end, even if it's a 100-line thing, writing it takes time.

Look up how to call those APIs you need. Debug when you inevitably get some of them wrong. Figure out that regex you need to write. Fix the 2-3 things you got wrong on the first pass of the "trivial" algorithm. Add some logic to catch and report errors and handle edge cases. Add tests.

All these are "trivial", but combined can take a couple of hours for something the AI will most of the time spit out correct the first time in a minute. And of course as you write you also explore dozens of decisions that could go either way, even with the same exact design and external interface to your code.

Getting that back from the LLM within a minute means you can explore alternative designs, handle new issues that occurred, add more functionality to make it more usable and smarter, etc., all while you'd otherwise still be writing the original, cruder version.

>Using AI is not a goal in itself, so I don’t care about “getting to use AI”. I care about doing my job as efficiently as possible, considering all parts of my job, not just coding.

Not the point. Nobody said AI is a goal in itself.

AI, however, does speed up the work, and if you take the black-and-white position of "if AI can't do it all by itself end-to-end without me intervening, then I'd rather write everything myself" (which is what I'm responding to), then you're not doing your job "as efficiently as possible".


The goalposts move every month. We're at the stage where handing an entire specification to a mid-tier AI, walking away while it does all the work, and then being disappointed that it wasn't perfect means it's useless.


If I still have to do a ton of work to clean up whatever the AI shits out then it might as well have done nothing. The promise of these systems from the hypesters is that it can do everything, so don't be surprised when people expect exactly that.

>If I still have to do a ton of work to clean up whatever the AI shits out then it might as well have done nothing.

Either you find that what AI produces is in general "shit" (which is not a realistic assessment of the latest LLMs, but OK).

Or you take a knee-jerk, all-or-nothing, black-and-white attitude to it.

"If you have to do a ton of work"? Is that work much less than what you'd have done without any AI assistance?



