Hacker News | pgt's comments

The inversion is really cool, e.g.

> f = λa λb concat ["Hello ",a," ",b,"!"] > f "Jane" "Doe" Hello Jane Doe!

then,

> g = f "Admiral" > invert g "Hello Admiral Alice!" Alice


@dang, pleaaase can we get proper markdown formatting on HN? I tried adding two spaces after each line, but I don't want paragraphs between code

4 spaces indent

The inversion is really cool, e.g.

    > f = λa λb concat ["Hello ", a, " ", b, "!"] 
    > f "Jane" "Doe" 
    Hello Jane Doe!
then,

    > g = f "Admiral" 
    > invert g "Hello Admiral Alice!" 
    Alice
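For comparison, the effect of `invert` on this particular function can be emulated by hand in ordinary code. A minimal Python sketch (names hypothetical; unlike the language in the article, it only inverts this one string template rather than deriving the inverse automatically):

```python
def f(a: str, b: str) -> str:
    return f"Hello {a} {b}!"

def invert_g(output: str, a: str = "Admiral") -> str:
    """Recover b from f(a, b)'s output by stripping the known prefix and suffix."""
    prefix, suffix = f"Hello {a} ", "!"
    if not (output.startswith(prefix) and output.endswith(suffix)):
        raise ValueError("output does not match the template")
    return output[len(prefix):-len(suffix)]

print(invert_g("Hello Admiral Alice!"))  # Alice
```

The interesting part of the original feature is precisely that the inverse is not hand-written like this, but derived from `f` itself.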

Fellow software engineers, what are we doing here? Why are we letting the EU / UK define the future of software?

1. The UK and EU are rather large markets that they don’t want to miss out on.

2. There are software engineers in the UK and EU.

3. This specific implementation by Apple is not actually required by any UK or EU law, to my knowledge.

4. This specifically is or will be required by the laws of some US states and other countries.


1. Since when is Linux about marketing? And who is "they"?

2. Devs for companies can start working with proprietary OSes for the businesses they sell their soul to.

3. Who cares what Apple is doing?

4. And systemd should not be liable for upholding any of them.


Maybe carefully read TFA - the age verification came from a Californian law.

"Apolitical" technology

Recently rewatched Demolition Man (1993), where criminals are frozen in cryostasis and then reanimated – a very prescient film. All I could think of was Demolition Pig.


I am getting disproportionately good results with the models by following a process: spec -> plan -> critique -> improve plan -> implement plan.


If I may "yes, and" this: spec → plan → critique → improve plan → implement plan → code review

It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.


A blanket follow-up of "are you sure this is the best way to do it?" frequently returns, "Oh, you are absolutely correct, let me redo this part better."


You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

At the end of the day it’s an autocomplete. So if you ask “are you sure?” then “oh, actually” is a statistically likely completion.


> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.

Not having to completely recreate all the LLM context necessary to understand the literal context and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves lots of time and tokens.


Interesting, I definitely see better results on a clean session. On a “dirty” session it’s more likely to go with “this is what we implemented, it’s good, we could improve it this way”, whereas on a clean session it’s a lot more likely to find actual issues or things that were overlooked in the implementation session.
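The clean-vs-dirty session distinction can be made concrete. A Python sketch, with a hypothetical `chat()` stub standing in for whatever model API you actually use:

```python
def chat(messages):
    """Hypothetical stand-in for an LLM API call; returns a canned reply."""
    return {"role": "assistant", "content": "(model reply)"}

SPEC = "Spec: add retry logic to the HTTP client."

# Session 1: plan + implementation; context accumulates turn by turn.
impl_session = [{"role": "user", "content": SPEC}]
impl_session.append(chat(impl_session))
impl_session.append({"role": "user", "content": "Implement the approved plan."})
impl_session.append(chat(impl_session))

# Session 2: review in a fresh context. Only the spec and the final diff
# are carried over -- none of the implementation back-and-forth that might
# bias the model toward defending its own work.
review_session = [
    {"role": "user", "content": SPEC},
    {"role": "user", "content": "Critically review this diff:\n<diff>"},
]
review_session.append(chat(review_session))
```

The trade-off both commenters describe falls out of the message lists: the clean session is cheaper to bias-proof but must be re-primed with whatever context the review needs.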


Can you give a little more detail how you execute these steps? Is there a specific tool you use, or is it simply different kinds of prompts?


I wrote it down here: https://x.com/BraaiEngineer/status/2016887552163119225

However, I have since condensed this into 2 prompts:

1. Write plan in Plan Mode

2. (Exit Plan Mode) Critique -> Improve loop -> Implement.


I follow a very similar workflow, with manual human review of plans and continuous feedback loops with the plan iterations.

See me in action here. It's a quick demo: https://youtu.be/a_AT7cEN_9I


similar approach


No one left ChatGPT over that deal: they decided to try Anthropic's Claude because the Department of War gave them free marketing.


I was paying both of them $200+/mo, and I went down to paying only Anthropic $200/mo.

My experience has, for a few months, been that OpenAI's models are consistently quite noticeably better for me, and so my Codex CLI usage had been probably 5x as much as my Claude Code usage. So it's a major bummer to have cancelled, but I don't have it in me to keep giving them money.

I'd love to get off Anthropic too, despite the admirable stance they took, the whole deal made me extra uncomfortable that they were ever a defense contractor (war contractor?) to begin with.


I left the OpenAI platform long before this, because I expected things like this. A few called me alarmist, but they are now also jumping ship because of this. OpenAI has zero moral or ethical substance, and people _do_ care about that. I'm extreme enough that joining OpenAI after a specific date works against you and your CV, not for you, while leaving at a specific date speaks volumes in favour of you. People are the sum of their actions, not their words, and siding with / continuing to use OpenAI speaks volumes about who you are.


The DoW or the CEO of Anthropic and his telenovela?


modelless


This thread reminds me of how Java's heavy GUI toolkit, written in Java itself, was called "lightweight", when in fact it did not feel lightweight at all on the hardware of the time.


I wanted to test this but if I decline file access I can't do anything. What gives? Do you want people to understand your product? Demo your product.

Why do you need file access to sell me?

Closed immediately.

btw. I am your target market.


During a session, PCB Tracer reads and writes over a dozen different file types — including images, schematics, datasheets, netlists, and revision history. It also has an AutoSave feature to prevent losing your work. Every file is saved to your project directory during a session. Doing all of this without constant permission requests requires the File System Access API, which is not yet available in all browsers. The Firefox developers have explicitly stated that this API will not be supported.


Honestly, that's exactly what it would look like if someone posted malware to a Show HN. I'm not claiming that's what this is, just that it's _exactly_ what it would look like, so you'd have to be braindead to go that route.


This is similar to how Clojure transducers are implemented: "give me the next thing plz." – https://clojure.org/reference/transducers
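For readers unfamiliar with transducers: they are transformations of reducing functions, decoupled from the input source and the output sink. A rough Python sketch of the idea (an illustration, not Clojure's actual implementation):

```python
from functools import reduce

def mapping(f):
    """Transducer: transform a reducing function so each input is mapped by f."""
    def xform(rf):
        return lambda acc, x: rf(acc, f(x))
    return xform

def filtering(pred):
    """Transducer: transform a reducing function so inputs failing pred are skipped."""
    def xform(rf):
        return lambda acc, x: rf(acc, x) if pred(x) else acc
    return xform

def compose(g, h):
    # Mirrors Clojure's comp: the outer transducer sees each input first.
    return lambda rf: g(h(rf))

append = lambda acc, x: acc + [x]
xf = compose(filtering(lambda x: x % 2 == 0), mapping(lambda x: x * 10))
result = reduce(xf(append), range(6), [])
print(result)  # [0, 20, 40]
```

The same `xf` could be driven by any reduction — over a list, a channel, a stream — which is the decoupling the parent comment is pointing at.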


The US govt & Hegseth are in a pickle, because if they blackball Anthropic, it will become more powerful than the govt could ever imagine, since that would be the greatest PR any frontier model could ever hope for.

It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.


No, this time is different.

