Hacker News | uoaei's comments

Your refusal to interact with subtext has me guffawing. I wonder if you even recognize what you're doing.

In the history of revolution, there is rarely (outside of elementary school) much weight put on the singular act that instigated the final result. The conditions in place (Jim Crow laws, Southern pride, etc.) lead up to a final moment which our monkey brains like to point to as the cause, but in reality there is a simmering cultural froth that could boil over in any number of ways. It just happens that one of those ways is what's described in the Wikipedia article; it could have started many other ways. All of our understanding of the experience of being Black in the US during that time helps contextualize the extreme and disproportionate outburst of violence by the White population as racially motivated, serving an ideology best described as an ur-"Great Replacement Theory".

In simpler words, the destruction of Black Wall Street is not without precedent; indeed, it is merely one of the more famous and complete examples of the destruction, through hate-fueled violence, of the wealth that Black people briefly enjoyed.


> I wonder if you even recognize what you're doing.

"Don't feed the trolls". They absolutely do know what they're doing.


Exactly. Even in the throes of today's wacky economic tides, storage is still cheap. Write the cached model state to disk immediately after the N context messages, then reload it without any extra inference on the context tokens themselves. Even if every customer did this for ~3 conversations per user, you would only need a small fraction of a typical datacenter to house the drives. The bottleneck becomes architecture/topology and the speed of your buses, which are problems that have been contended with for decades now, not inference time on GPUs.
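To make the scheme concrete: the idea is to key the saved state on the exact context prefix, so a matching request reloads from disk instead of re-running inference. Below is a minimal, hypothetical sketch (not anyone's actual serving code) in which a plain dict stands in for the real per-layer KV tensors:

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("kv_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(context_messages):
    # Key on the exact message prefix: any change to the context
    # invalidates the cached state, since KV entries are prefix-dependent.
    blob = "\x00".join(context_messages).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def save_state(context_messages, model_state):
    # Persist the (stand-in) model state for this exact context prefix.
    path = CACHE_DIR / cache_key(context_messages)
    path.write_bytes(pickle.dumps(model_state))

def load_state(context_messages):
    # On a hit, skip re-inference over the context tokens entirely.
    path = CACHE_DIR / cache_key(context_messages)
    if path.exists():
        return pickle.loads(path.read_bytes())
    return None  # cache miss: the model must re-process the prefix
```

In a real system the payload would be gigabytes of per-layer key/value tensors rather than a pickled dict, which is exactly why the bus and topology concerns above dominate.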

This has nothing to do with the cost of storage. Surprisingly, you are not better informed than Anthropic on the subject of serving AI inference models.

A sibling comment explains:

https://news.ycombinator.com/item?id=47886200


They don't cache model state to disk. I am proposing they do.

I’m proposing that you should educate yourself on the subject of LLM KV context caching.

So is it likely this one merely escaped? I find it hard to believe someone who would own one of these would not be an enthusiast, and that enthusiasts wouldn't find another owner for a critically endangered species rather than merely drop it under a local bridge.

No, it is extremely unlikely this is an "escape". This animal would be lucky to survive a week in Europe. Almost certainly someone bought one, realized they are too complicated to take care of, and decided to dump it in a spot they thought looked pretty.

Also there are 1,000 of these in the wild but there are over a million of them in captivity. You can get a typical morph for about $50.


It's this, for sure. An axolotl is not going to live in the wild. I own a home near a public pond. There are pretty much always fancy goldfish swimming in it during the time of the year that everyone moves out. People just decide not to keep their fish.

> I find it hard to believe someone who would own one of these would not be an enthusiast

You underestimate how many people lack impulse control, consideration for their choices, and any understanding of the consequences of buying a living organism.


I recommend you read Feyerabend's Science in a Free Society.

I'm not sure I see the relevance?

Yours is a rather pedestrian dorm-room take on epistemology and on the relevance of the moral dimension to social progress, one whose flaws Feyerabend addresses in longform.

What a bizarre response.

I was relaying the technical details of working in these data environments based on deep, real-world operational experience in the domain. There is no "moral dimension" to it, I was describing the world as it exists.

Does Feyerabend also have an opinion on compiler flags and sorting algorithms?


There is a moral dimension, you're just choosing not to acknowledge it.

Feyerabend speaks to things that add context and nuance to the effects, consequences, and provisions of the things you've felt comfortable discussing so far.

Hence the recommendation. Your awareness could use some expanding, if I may be so blunt.


It read like a longtime adderall addict who switched to clean meth a while ago.

It read like a C- college sophomore dudebro who read some Ayn Rand and Raspail and Yockey and said "I fucking am John Galt", hit a bong, and got to scribbling their 'manifesto'.

All three of these suggestions are likely true. I’ve never done Ketamine but I’ve heard it can seriously degrade the user’s “quality control” of their ideas, meaning that ideas that they have, or ideas they get from others, that are intellectually subpar appear to be quite brilliant. The dissociation is also helpful for overcoming moral qualms if they were ever present.

Combine that with speed and an insular SV culture steeped in the ideology of Ayn Rand and Nick Land (who likely suffered from amphetamine psychosis), and you get something like this Palantir manifesto.

I would feel sorry for them if they weren’t building skynet.


I feel sorry for them because some of them will never wake up to experience life

And that's deeply sad to me


Most of this tech oligarch group seems pathetic in this way. It's tragic, yes, but they're making their own bed.

> you see the tax payer as a cunning and evil adversary that needs to be reigned upon, and you see that all the jokes, the water cooler talk, the general ethos is toward this vision of the tax payer

Any force employing threat of violence for control does the same. Police presence, military occupation, hell you even see it in the eyes of loss prevention folks.


There are morally neutral technologies, but the unique quality of surveillance data containing PII (and tools to correlate across time and space) means that it's only morally neutral until it is used in any capacity. Which is to say, it is not morally neutral.

You've already made a pretty big leap from surveillance to storing surveillance data persistently, and another to the tools. I'm not going to argue that mass surveillance is morally neutral.[1]

Tolkien's palantíri let you see and communicate and influence across vast distances. That's no more immoral than a videophone. Of course, that's also not surveillance; that'd be a telescope. But surely telescopes aren't immoral?

[1] I mean, I would, but (1) you can't create a mass surveillance system from a morally neutral or positive place, and (2) it seems nearly impossible to implement a mass surveillance system without creating more harm than benefit. So it becomes a boring semantics argument as to whether mass surveillance is fundamentally immoral or not.


I remember seeing postings for "Forward Deployed Engineers" and thinking that this naming convention targets folks who don't like to work out but still have a military fetish and want to feel important.

It's self-aggrandizing egos all the way down/up (to Alex Karp).


Only if you refuse to acknowledge the deleterious effects on society from ruining any notion of trust.

Being held hostage by bad actors until you fortify your defenses seems a very unnecessary technical solution to an easily predicted social problem.

Normalizing this only means the subterfuge becomes more subtle, not that you remove it entirely. But you preserve the incentives by not changing this system. So you're spending a lot of resources wastefully when you could just... not.


Right: in the minds of millions right now there's literally no reason to fight for this country. This country certainly ignores their existence and refuses to honor most forms of basic rights such as health, education, and shelter. It really only cares if you mess up your tax return.
