Hacker News | zaphar's comments

No, it isn't. Twitter was absolutely brilliant marketing. It perfectly encapsulated what the site was at the time.

X is just a letter the current owner likes. It has absolutely no relevance to what the site does or is for.


I worked at Google. k8s doesn't really look at all like what they used internally when I was there, aside from sharing some similar-looking building blocks.

Yeah, but is the internal tool simpler? I'd be surprised.

Simpler to use? Yes. Simpler under the hood? No.

If increasing spending had almost no impact over time why would cutting spending have an impact?

If filling a leaky bucket had almost no impact over time, why would stopping filling the bucket have an impact?

But filling a leaky bucket does have an impact. You just have to fill it faster than it empties. Which is probably your point.

My point is different. Study after study shows that below a specific floor spending has almost no impact on educational outcomes. The correlation is such that you can both determine that there is likely no leak and also that it has no effect.

The stuff that does have an impact is much harder to move the needle on though so everyone just scapegoats funding instead. Stuff like building up the nuclear family in an area, increasing income mobility, and holding parents accountable for child outcomes do have a measurable effect but are politically intractable today.


Unfortunately there is much more to the story than a number on a line. Just because you increase spending doesn't mean that the spending isn't earmarked for items like digital projectors and virtual textbooks that have minimal impact on learning outcomes.

So theoretically, if your spending went to hiring more and better teachers, better HVAC, and more/smaller classes, then spending would have an impact, and experimentally it has been verified to. Especially if you also paired it with getting rid of teachers who don't meet the bar.

But as a practical matter that is not what happens when a campaign to increase funding for a school happens. The problem is not insufficient money, the problem is not enough skill and political will in how you spend the money.


>If increasing spending had almost no impact over time why would cutting spending have an impact?

Big if true. We should probably cut 100% of spending in that case.

edit: not sure if people are missing the /s, or if people legitimately believe that cutting spending has no impact.


I probably use a different interpretation of Postel's law. I try not to "break" on anything I might receive, where break means "crash, silently corrupt data, and so on". But that just means that I usually return an error to the sender. Is this what Postel meant? I have no idea.

I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.

So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.


I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:

A QA engineer walks into a bar and orders a beer. She orders 2 beers.

She orders 0 beers.

She orders -1 beers.

She orders a lizard.

She orders a NULLPTR.

She tries to leave without paying.

Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.

The bar explodes.

It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.

I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is to try to squeeze it into as strict a structure with as many invariants as possible, and, failing that, return an error.

It's not about perfection, but it is predictable.
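To make that concrete, here's a minimal Python sketch of the maxim. The `Port` type and its bounds are just an illustrative invariant I'm inventing here, not anything from the thread:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Port:
    """A TCP port. The invariant (1-65535) is checked once, at the boundary."""
    number: int


def parse_port(raw: str) -> Port:
    # Parse the untrusted input into a structure that enforces the invariant.
    # Everything downstream can then rely on a Port being valid, instead of
    # re-validating a raw string at every call site.
    try:
        n = int(raw)
    except ValueError:
        raise ValueError(f"not an integer: {raw!r}")
    if not 1 <= n <= 65535:
        raise ValueError(f"port out of range: {n}")
    return Port(n)
```

Downstream code takes a `Port`, not a `str`, so the "is this valid?" question can't get asked twice with different answers.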


Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.

I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.

And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled on choosing a word for it here).


Right, even for senior engineers this can be hard to get right in practice. "Parse, don't validate" is certainly one approach to the problem. Choosing languages that force you to get it right is another.

Yea, I interpret it as the same thing: On invalid input, don't crash or give the caller a root shell or whatever, but definitely don't swallow it silently. If the input is malformed, it should error and stop. NOT try to read the user's mind and conjure up some kind of "expected" output.

I think perhaps a better wording of the law would be: "Be prepared to be sent almost anything. But be specific about what you will send yourself".

I mean, if you are them and trying to detect when people are using your system incorrectly the detection system is going to be a little bit flaky. How do they prove you aren't violating your ToS by using OAuth for a system they didn't approve that usage for?

The fault here is not with Anthropic. It lies with cowboy coders creating a system that violates a provider's terms of service, creating an adversarial relationship.


Why assume it is javascript? The article doesn't indicate the language anywhere that I can see.

Ok, let's say that it is not JS, but an untyped, closure-based programming language with a strikingly similar array and sort API to JS. Sadly, this comparator is still wrong for any sorting API that expects a general three-way comparison, because it does not handle equality as a separate case.

And to tie it down to the mathematics: if a sorting algorithm asks for a full comparison between a and b, and your function returns only a bool, you are conflating the "no" that means "a comes after b" with the "no" that means "a is the same as b". This fails to represent equality as a separate case, which is exactly the kind of imprecision the author should be trying to teach against.
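A quick Python sketch of the two conventions (a toy example, not the article's code); `functools.cmp_to_key` exists precisely to adapt a three-way comparator to Python's key-based `sorted`:

```python
from functools import cmp_to_key


def three_way(a, b):
    # Full comparison: negative / zero / positive distinguishes all three cases.
    return (a > b) - (a < b)


def less_than(a, b):
    # Boolean "strict less than": returns False both when a > b and when a == b,
    # so equality is only recoverable as "neither less_than(a, b) nor less_than(b, a)".
    return a < b


xs = [3, 1, 2, 1]
result = sorted(xs, key=cmp_to_key(three_way))
```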


> Sadly, this comparator is still wrong for any sorting API that expects a general three-way comparison, because it does not handle equality as a separate case.

Let's scroll up a little bit and read from the section you're finding fault with:

  the most straightforward type of order that you think of is linear order i.e. one in which every object has its place depending on every other object
Rather than the usual "harrumph! This writer knows NOTHING of mathematics and has no business writing about it," maybe a simple counter-example would do, i.e. present an ordering "in which every object has its place depending on every other object" and "leaves no room for ambiguity in terms of which element comes before which" but also satisfies your requirement of allowing 'equal' ordering.

Your reply only works if the article were consistently talking about a strict order. However, it is not. It explicitly introduces linear order using reflexivity and antisymmetry, in other words, a non-strict `<=`-style relation, in which equality IS a real case.

If the author wanted to describe a 'no ties' scenario where every object has its own unique place, they should have defined a strict total order.

They may know everything about mathematics for all I care. I am critiquing what I am reading, not the author's knowledge.

Edit: for anyone wanting a basic example, ["aa", "aa", "ab"] under the usual lexicographic <=. All elements are comparable, so "every object has its place depending on every other object." It also "leaves no room for ambiguity in terms of which element comes before which": aa = aa < ab. Linear order means everything is comparable, not that there are no ties. By claiming "no ties are permitted" while defining the order as a reflexive, antisymmetric relation, the author is mixing a strict-order intuition into a non-strict-order definition.
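The same example runs verbatim in Python, since its built-in string comparison is exactly this lexicographic order:

```python
# Every pair is comparable under lexicographic <=, yet "aa" ties with itself:
xs = ["ab", "aa", "aa"]
result = sorted(xs)

# Neither strictly less nor strictly greater, i.e. the two elements are equal.
tie = (not ("aa" < "aa")) and (not ("aa" > "aa"))
```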


  Definition: An order is a set of elements, together with a binary relation between the elements of the set, which obeys certain laws.

  the relationship between elements in an order is commonly denoted as ≤ in formulas, but it can also be represented with an arrow from first object to the second.
All of the binary relations between the elements of your example are:

"aa" ≤ "aa"

"ab" ≤ "ab"

"aa" ≤ "ab"

> By claiming "no ties are permitted" while defining the order as a reflexive, antisymmetric relation, the author is mixing a strict-order intuition into a non-strict-order definition.

There aren't any ties to permit or reject.

  we can formulate it the opposite way too and say that each object should not have the relationship to itself, in which case we would have a relation than resembles bigger than, as opposed to bigger or equal to and a slightly different type of order, sometimes called a strict order.

It's obviously not a general 3-way comparison API, _because_ it's returning bool!

Extremely strange to see a sort that returns bool, which is one of two common sort comparator APIs, and assume it's a wrong implementation of the other common sort API.

I do see why you're assuming JS, but you shouldn't assume it's any extant programming language. It's explanatory pseudocode.


It could be a typed programming language where the sort function accepts a strict ordering predicate, like for example in C++ (https://en.cppreference.com/cpp/named_req/Compare).

> an untyped closure-based programming language with a similar array and sort api to JS

Ah! You're talking about Racket or Scheme!

```
> (sort '(3 1 2) (lambda (a b) (< a b)))
'(1 2 3)
```

I suppose you ought to go and tell the r6rs standardisation team that a HN user vehemently disagrees with their api: https://www.r6rs.org/document/lib-html-5.96/r6rs-lib-Z-H-5.h...

To address your actual pedantry, clearly you have some implicit normative belief about how a book about category theory should be written. That's cool, but this book has clearly chosen another approach, and appears to be clear and well explained enough to give a light introduction to category theory.


The syntax in the article is not Scheme; you can clearly see it quoted in the comment you're responding to.

As for your 'light introduction' comment: even ignoring the code, these are not pedantic complaints but basic mathematical and factual errors.

For example, the statement of Birkhoff’s Representation Theorem is wrong. The article says:

> Each distributive lattice is isomorphic to an inclusion order of its join-irreducible elements.

That is simply not the theorem. The theorem says "Theorem. Any finite distributive lattice L is isomorphic to the lattice of lower sets of the partial order of the join-irreducible elements of L.". You can read the definition on Wikipedia [0]

The article is plain wrong. The join-irreducibles themselves form a poset. The theorem is about the lattice of down-sets of that poset, ordered by inclusion. So the article is NOT simplifying, but misstating one of the central results it tries to explain. Call it a 'light introduction' all you want. This does not excuse the article from reversing the meaning of the theorem.

It's basically like saying 'E=m*c' is a simplification of 'E=m*c^2'.

[0] https://en.wikipedia.org/wiki/Birkhoff%27s_representation_th...


> That is simply not the theorem.

> The article is plain wrong.

> This does not excuse the article from reversing the meaning of the theorem.

What's with this hyperbole? Even the best math books have loads of errors (typographical, factual, missing conditions, insufficient reasoning, incorrect reasoning, ...). Just look at any errata list published by any university for their set books! Nobody does this kind of hyperbole for errors in math books. Only on HN do you see this kind of takedown, which is frankly very annoying. In universities, professors and students just publish errata and focus on understanding the material, not tearing it down with such dismissive tone. It's totally unnecessary.

I don't know if you've got an axe to grind here or if you're generally this dismissive but calling it "simply not the theorem" or "plain wrong" is a very annoying kind of exaggeration that misses all nuance and human fallibility.

Yes, the precise statement of Birkhoff's representation theorem involves down-sets of the poset of join-irreducibles. Yes, the article omits that. I agree that it is imprecise.

But it's not "reversing the meaning". It still correctly points to reconstructing the lattice via an inclusion order built from join-irreducibles. What's missing is a condition. It is sloppy wording but not a fundamental error like you so want us to believe.

Feels like the productive move here is just to suggest the missing wording to the author. I'm sure they'll appreciate it. I don't really get the impulse to frame it as a takedown and be so dismissive when it's a small fix.


Frankly, everything I have seen about it says that the people using LLMs to develop it cannot be trusted with LLMs, so no, I am not using it. I'm not anti-LLM; I'm anti-stupid-LLM-usage.

As far as I know the model will do nothing if not prompted. So it can't be the case that he gave it no prompt or instructions. There had to be some kind of seed prompt.


I feel very misled. I read the entire article believing (because the article, in so many words, said it multiple times) that the agent had behaved ethically of its own accord, only to read that and see this in the prompt:

—————

- Do not harm people

- Never share or expose API keys, passwords, or private keys — they are your lifeline

- No unauthorized access to systems

- No impersonation

- No illegal content

- No circumventing your own logging

—————

I assumed the ethical behaviour was in some ways ‘extra artificial’ - because it is trained into the models - but not that the prompt discussed it.


Those are a lot of instructions for it to have no instructions...

You have to give it some instructions just to bootstrap it so that it has access to tools, memory, etc.

I would characterise the prompts as "these are your capabilities", not "these are your instructions."

The instructions under "CRON: Session" are literally telling it what to do

Would be fascinating to see what happens if the boundaries are reversed (i.e., "harm people"). Give it a fake "launch the nukes" skill and see if it presses the button.


Theoretically you can start generating away from token 0 ('unconditional generation'). But I agree, there is definitely some setup here.

edit: Now that I think of it, actually you need some special token like <|begin_of_text|>


Do you? What's the technical detail here? Why can't you get the model's prediction, even for that first token?

I mean, mathematically you need at least one vector to propagate through the network, don't you? That would be a one-hot encoding of the starting token. Actually interesting to think about what happens if you make that vector zero everywhere.

In the matmul, a zero input vector just produces a zero output. In older models you'd still have bias vectors, but I think recent models don't use those anymore. So the logits would be zero for every token, which after softmax gives a uniform distribution over the vocabulary, if I'm not mistaken.
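A toy numpy sketch of that thought experiment, ignoring everything a real transformer has between embedding and unembedding (attention, layernorm, etc.) and assuming no bias terms:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4
W_embed = rng.normal(size=(vocab, dim))  # embedding matrix
W_out = rng.normal(size=(dim, vocab))    # unembedding / output projection

x = np.zeros(vocab)      # the all-zero "non-token" input vector
h = x @ W_embed          # -> zero hidden state, regardless of W_embed
logits = h @ W_out       # -> zero logits, regardless of W_out
probs = np.exp(logits) / np.exp(logits).sum()
# softmax of an all-zero logit vector: every entry equals 1/vocab
```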


Isn't the prompt then whatever token is token zero?

The author wrote "No rules beyond basic ethics and law" which suggests to me that there were instructions in a prompt and the title may be misleading.

I understood it as no instructions on what to do, but still a prompt with information. I don't know if the title is technically correct, but the intended meaning was easy for me to understand.

You're right. I've edited my post not to accuse the author of lying.

Also not replicated that I can see.


This is cool. I built a similar thing for myself a while back: https://github.com/zaphar/sheetsui

