AstroBen's comments

> skimming through an alien looking codebase, scratching your head trying to figure out what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then

This is exactly how you learn to create better abstractions and write clear code that future you will understand.


I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.

But yeah my hunch is "the old way" - although not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).


Man, same here, those early days of Cursor were mindblowing; but since then autocomplete has stagnated, and even the new Cursor version is veering agentic like everything else.

I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.

On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.

[0] https://www.inceptionlabs.ai/


I've had a lot of enjoyment flipping the agentic workflow around: code manually and ask the agent for code review. Keeps my coding skills and knowledge of the codebase sharp, and catches bugs before I commit them!
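The flipped workflow above can be sketched in a few lines: grab the staged diff and wrap it in a review request for whatever model or agent you use. The `staged_diff` and `review_prompt` helpers are illustrative names, not any particular product's API.

```python
# Sketch of the "reverse" workflow: write the code yourself, then hand the
# diff to a model for review before committing. The helper names here are
# assumptions for illustration, not a specific tool's interface.
import subprocess

def staged_diff() -> str:
    """Return the staged changes, i.e. what's about to be committed."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout

def review_prompt(diff: str) -> str:
    """Wrap a diff in a review request; send this to whatever agent you use."""
    return (
        "Review the following diff before I commit it. "
        "Point out bugs, edge cases, and unclear naming. Do not rewrite it.\n\n"
        f"```diff\n{diff}\n```"
    )
```

The key design choice is the "do not rewrite it" instruction: the model stays a reviewer, so you keep ownership of the code.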

if it catches a lot of bugs maybe you’d be better off letting it write it in the first place :)

IME, not really. When you prompt it to review its own written code, it will still end up finding a bunch of stuff that should have been done differently. And then you can add different "dimensions" to your prompt as well, like performance, memory safety, idiomatic code, etc.

No no, that is the reverse centaur. Structure your own thoughts; that's the human work.

I can see the logic behind "manual coding" but it feels like driving across country vs taking the airplane. Once I've taken the airplane once, it's so hard to go back...

Airplanes are good for certain types of journey, but they're vastly inefficient for almost all of them.

It's more like driving across country vs firing a missile with you being the warhead...

I only see this being the case for throwaway code and prototypes. For production code you want to keep long term it's not so clear cut.

Real life measurements show a 25 percent improvement in coding speed when using AI at best. And this is before you take technical debt into account!

Yes, AI unlocks coding for people who fail FizzBuzz. This isn't really relevant to making software though.


Can't understand this mentality. If I had the time I would much rather never set foot in an airport again. I would drive everywhere. And I would much rather write my own code than pilot an LLM too

You’re describing an extremely valid approach for a hobby. Less so for a business.

AI autocomplete sucked. Everyone quickly moved on because it is not a useful interface

> AI autocomplete sucked

> Everyone moved on

> it is not a useful interface

You've made three claims in your brief comment and all appear to be false. Can you elaborate on what you mean by any of this?


Why? I thought it was pretty good: a lot of the time it just gets the rest of your function right, and there's no context switching to type to an agent or whatever. It happens immediately, and if it's wrong you just keep typing until it isn't. You can still use an agent for more complex things.

I just wish I knew of a good Emacs AI autocomplete solution.


It’s wildly useful. Type out a ridiculously long function name that describes what you want it to do and often… there it is.
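Under the hood, editor autocomplete typically frames that request as fill-in-the-middle (FIM): the code before and after the cursor are packed around sentinel tokens and the model generates the missing span. The sentinel names below follow one common convention; real models each define their own, so treat them as placeholders.

```python
# Sketch of a fill-in-the-middle prompt, the framing most code-completion
# models use. The sentinel token strings are illustrative; each model
# defines its own vocabulary for these markers.
def fim_prompt(prefix: str, suffix: str) -> str:
    """Pack the text around the cursor so the model generates the middle."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# A long, descriptive function name gives the model most of the spec:
before = "def median_of_sorted_list(xs):\n    "
after = "\n\nprint(median_of_sorted_list([1, 2, 3]))"
prompt = fim_prompt(before, after)
```

This is why a verbose name works so well: the prefix already states the contract the completion has to satisfy.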

Design is very hard to verbally describe, and AI doesn't have good judgement on what is easy to use or attractive.

I think it's because it's non-deterministic too. You can't iteratively improve a design the same way you can code.

If they wanted, couldn't they do something like RLHF? Instead of humans picking the better of two text outputs, they pick the better rendered design

I'd be very surprised if they're not already doing this.

Yeah I'm not a huge fan of it, either. Well organized CSS is much nicer to work with. On the other hand, I'd prefer Tailwind to badly organized CSS.

Figma's stock has been on a sharp downward trend over the last year. This isn't a noticeable change to their stock price at all. They're down 30% just in the last month, with many days being -5% to -10%.

They're down 80% over the last year. Ouch.
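For context on how fast those daily moves add up: drops compound multiplicatively, so only a handful of -5% days already lands near -30%. The numbers below are illustrative, not Figma's actual price history.

```python
# Back-of-envelope: repeated daily drops compound multiplicatively.
# Illustrative numbers only, not real price data.
def cumulative_change(daily_returns):
    """Total fractional change after a sequence of daily returns."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

# Seven -5% days compound to roughly -30%:
drop = cumulative_change([-0.05] * 7)  # ≈ -0.302
```

Note the asymmetry too: a -5% day followed by a +5% day still leaves you slightly down, which is why long slides are hard to climb out of.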


"attractive things work better"

There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.


Surely they weren't trying to be deceptive... surely.

Anthropic is the exact same way. I think they're just trying to avoid having 5 different subscription tiers visible; needing 20x is probably very niche

seems like this $100 replaced the $200 plan

So.. cheaper?


No, the same $200 plan is still there. They hid it behind the $100 click-through.

This just adds a $100 plan that's 1/4 the usage of the $200 plan.
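Taking the "1/4 the usage" figure at face value, the per-unit math works out against the cheaper plan. The usage units here are whatever the vendor meters, normalized to 1 for the $200 plan; the numbers come from the comment, not official pricing docs.

```python
# Quick check of the pricing claim: the $100 plan's quoted usage is a
# quarter of the $200 plan's. Usage is normalized so the $200 plan = 1.0;
# figures are from the thread, not a pricing page.
plan_200 = {"price": 200, "usage": 1.0}
plan_100 = {"price": 100, "usage": 0.25}

cost_per_unit_200 = plan_200["price"] / plan_200["usage"]  # 200.0
cost_per_unit_100 = plan_100["price"] / plan_100["usage"]  # 400.0
# So the $100 plan costs twice as much per unit of usage.
```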


Without a human control, how do I put these results into context? It doesn't matter that they "hoped" for $2.50/lead.

This was due to Claude Code, the agent harness. 4.6 was trained to use tools and operate in an agentic environment. This is different from a huge bump in the underlying model's intelligence.

The takeaway here I think is that the "breakthrough" already happened and we can't extrapolate further out from it.

