> skimming through an alien-looking codebase, scratching your head trying to figure out what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then
This is exactly how you learn to create better abstractions and write clear code that future you will understand.
I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.
But yeah my hunch is "the old way" - although not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
Man, same here, those early days of Cursor were mindblowing; but since then autocomplete has stagnated, and even the new Cursor version is veering agentic like everything else.
I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.
On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.
I've had a lot of enjoyment flipping the agentic workflow around: code manually and ask the agent for code review. Keeps my coding skills and knowledge of the codebase sharp, and catches bugs before I commit them!
IME, not really. When you prompt it to review its own written code, it will end up finding a bunch of things that should have been done differently. And then you can add different "dimensions" in your prompt as well, like performance, memory safety, idiomatic code, etc.
I can see the logic behind "manual coding" but it feels like driving across the country vs taking the airplane. Once I've taken the airplane once, it's so hard to go back...
Can't understand this mentality. If I had the time I would much rather never set foot in an airport again. I would drive everywhere. And I would much rather write my own code than pilot an LLM too
Why? I thought it was pretty good: it completes the rest of your function a lot of the time, with no context switching to type to an agent or whatever. It just happens immediately, and if it's wrong you just keep typing till it isn't. You can still use an agent for more complex things.
I just wish I knew of a good Emacs AI auto complete solution.
Figma's stock has been on a sharp downward trend over the last year. This isn't a noticeable change to their stock price at all. They're down 30% just in the last month, with many days being -5% to -10%.
There have been studies showing aesthetics matter quite a bit for UX - users perceive things that are attractive as being easier to use and less frustrating.
Anthropic is the exact same way, I think they're just trying to avoid having 5 different subscription tiers visible. Needing 20x is probably very niche.
This was due to Claude Code, the agent harness. 4.6 was trained to use tools and operate in an agent environment. This is different from there being a huge bump in the underlying model's intelligence.
The takeaway here I think is that the "breakthrough" already happened and we can't extrapolate further out from it.