Recently rewatched Demolition Man (1993) where criminals are frozen in cryostasis and then reanimated – a very prescient film. All I could think of was Demolition Pig
If I may "yes, and" this: spec → plan → critique → improve plan → implement plan → code review
It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.
> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.
I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.
Not having to completely recreate all the LLM context necessary to understand the literal context and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves a lot of time and tokens.
Interesting, I definitely see better results on a clean session. On a “dirty” session it’s more likely to go with “this is what we implemented, it’s good, we could improve it this way”, whereas on a clean session it’s a lot more likely to find actual issues or things that were overlooked in the implementation session.
I was paying both companies $200+/mo, and I went down to paying only Anthropic $200/mo.
My experience has, for a few months, been that OpenAI's models are consistently quite noticeably better for me, and so my Codex CLI usage had been probably 5x as much as my Claude Code usage. So it's a major bummer to have cancelled, but I don't have it in me to keep giving them money.
I'd love to get off Anthropic too. Despite the admirable stance they took, the whole deal made me extra uncomfortable that they were ever a defense contractor (war contractor?) to begin with.
I left the OpenAI platform long before this, because I expected things like this. A few people called me alarmist, but they are now also jumping ship because of this. OpenAI has zero moral or ethical substance, and people _do_ care about that. I'm extreme enough that joining OpenAI after a specific date works against you and your CV, not for you, while leaving at a specific date speaks volumes in your favour. People are the sum of their actions, not their words, and siding with or continuing to use OpenAI speaks volumes about who you are.
This thread reminds me of how Java's heavy GUI, written in Java itself, was called "lightweight" when in fact it did not feel lightweight at all on the hardware of the time.
During a session, PCB Tracer reads and writes over a dozen different file types — including images, schematics, datasheets, netlists, and revision history. It also has an AutoSave feature to prevent losing your work. Every file is saved to your project directory during a session. Doing all of this without constant permission prompts requires the File System Access API, which is not yet available in all browsers. The Firefox developers have explicitly stated that this API will not be supported.
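For context, a minimal sketch of what gating on that API looks like. `showDirectoryPicker` is the real entry point of the File System Access API (shipped in Chromium-based browsers, absent in Firefox); the helper names `supportsFileSystemAccess` and `openProjectDir` are hypothetical, not PCB Tracer's actual code:

```typescript
// Hypothetical helper: detect whether the File System Access API is available.
function supportsFileSystemAccess(): boolean {
  // showDirectoryPicker only exists in browsers that ship the API.
  return typeof (globalThis as any).showDirectoryPicker === "function";
}

// Hypothetical helper: ask once for a project directory handle.
async function openProjectDir(): Promise<unknown> {
  if (!supportsFileSystemAccess()) {
    // Fallback path (e.g. <input type="file"> reads and download-based
    // saves), which prompts the user on every write.
    return null;
  }
  // A single "readwrite" grant covers all later reads/writes inside the
  // chosen directory, which is what makes a silent AutoSave possible.
  return (globalThis as any).showDirectoryPicker({ mode: "readwrite" });
}
```

The point is that without this API, every save would need a fresh user gesture, which rules out background AutoSave.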
Honestly, that's exactly what it would look like if someone posted malware to a Show HN. I'm not claiming that's what this is, just that it's _exactly_ what it would look like, so you'd have to be braindead to go that route.
The US govt & Hegseth are in a pickle: if they blackball Anthropic, Anthropic will become more powerful than the govt could ever imagine, because it would be the greatest PR any frontier model could ever hope for.
It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.
> f = λa λb concat ["Hello ", a, " ", b, "!"]
> f "Jane" "Doe"
Hello Jane Doe!
then,
> g = f "Admiral"
> invert g "Hello Admiral Alice!"
Alice