Hacker News | 3s's comments

> I think if Bob Odenkirk lived on a community farm where everyone had to work together to survive he would be far happier and think life is far more meaningful.

So you think everyone was happier in the USSR? /s


So you think everyone in the USSR lived on a community farm?

I guess you don't really understand the USSR then...


Probably a better question would be to ask the Amish how happy they are. G-D was conceived to fill this gap in the human experience; the Amish harness it to set limits on desire.


From all the parochial nationalistic close minded savage bigoted homophobic toxic macho corrupt Putin boot licking behavior they regularly exhibit, and their spectacularly brutal self-destructive war criminal performance in Ukraine, you'd think they all lived on a community farm, raised as livestock and cannon fodder.

Living in modern cosmopolitan Moscow yet still acting and thinking that way is so much worse than actually being born on a community farm without choosing that lifestyle and mentality.


It’s not top of the line and mostly not open source


Not to mention their recent integration of Persona ID verification - that was the last straw for me.


But they already have PII on nearly all users. Many users upload documents with their name, or pictures of themselves, or have a chat where home addresses come up. All of this is information Anthropic already has on its users (voluntarily provided via chats or via the API) and it's equivalent to what Persona gets via its verification. It's just more convenient to use a third-party SaaS product for this than to vibe-code their own identity-verification platform, I guess.


This might be conflating two things: what data exists somewhere, and how many different independent parties hold it. It's not the same risk.

Put it this way: I sort of already trust Anthropic with some of my PII. And that's ... maybe not ok actually. But it's a single failure surface.

But that's definitely not the same thing as trusting Anthropic AND Persona AND all of Persona's partners AND their partners, ad infinitum.

And let's say Persona is actually ok; who knows, they might be. But it's still an extra surface, and if they share again, that's yet another surface on top of that.

It's fairly common-sense blast-radius minimization. This is part of the actual theory behind the GDPR.

"We already seem to be accidentally leaking some data through channel A" doesn't mean it's a good idea to open channels B-Z as well. It means you might want to tighten down channel A.


Yes, it appears your personal data IS being sent to OpenRouter and the model provider here. The problem, I think, is that a lot of people (especially in the openclaw community) mistake "I run it on my Mac mini" to mean their data is private. Meanwhile all the data is being shipped off for training to Anthropic via OpenRouter, and both of those parties see everything.

I guess you could theoretically plug in a local model here, but of course the README should be more precise when talking about privacy.


The attestation report is produced ahead of time and verified on each connection (before the prompt is sent). Every time the client connects to make an inference request via one of the Tinfoil SDKs, the attestation report is checked against a known-good, public configuration to ensure the connection is to a server running the right model.
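A minimal sketch of what that per-connection check might look like. The function name, report fields, and pinned values below are illustrative assumptions, not Tinfoil's actual SDK API:

```python
# Hypothetical known-good configuration, published out of band
# (placeholder hex strings stand in for real measurements).
KNOWN_GOOD = {
    "enclave_measurement": "a" * 64,  # expected hardware measurement
    "model_root_hash": "b" * 64,      # expected model weights root hash
}

def verify_attestation(report: dict, known_good: dict) -> bool:
    """Check an attestation report against pinned known-good values
    before any prompt is sent over the connection."""
    return (
        report.get("enclave_measurement") == known_good["enclave_measurement"]
        and report.get("model_root_hash") == known_good["model_root_hash"]
    )

# Report the server presents on connect (simulated here).
report = {"enclave_measurement": "a" * 64, "model_root_hash": "b" * 64}
print(verify_attestation(report, KNOWN_GOOD))   # matches: True

tampered = dict(report, model_root_hash="c" * 64)
print(verify_attestation(tampered, KNOWN_GOOD)) # wrong weights: False
```

The key property is that the known-good values are pinned on the client side, so a server can't simply claim to be running the right model.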


The attestation is tied to the Modelwrap root hash (the root hash is included in the attestation report), so you know that the machine serving the model has the right model weights.
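As a rough illustration of the idea, a root hash can be derived from the weight files so that any change to the weights changes the attested value. This flat chunk-hash construction is an assumption for illustration; Modelwrap's real scheme may differ (e.g. a Merkle tree):

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4096) -> list[bytes]:
    """Hash fixed-size chunks of the weights blob."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    ]

def root_hash(data: bytes) -> str:
    """Combine the chunk hashes into a single root hash.
    Illustrative only, not Modelwrap's actual construction."""
    return hashlib.sha256(b"".join(chunk_hashes(data))).hexdigest()

weights = b"fake model weights"
attested = root_hash(weights)  # value the enclave would embed in its report

# A verifier recomputing the hash over the same weights gets a match;
# any tampering with the weights produces a different root hash.
print(root_hash(weights) == attested)             # True
print(root_hash(b"tampered weights!") == attested)  # False
```

Because the root hash is embedded in the hardware-signed attestation report, swapping in different weights would be detectable by the client.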


The absence of solutions for LLM privacy on that list is telling. We've figured out how to have private communications with other humans via end-to-end encryption, but arguably we're leaking a lot more about ourselves to chatbots in a few sessions than we do to even our closest friends and family over WhatsApp.


It uses confidential-computing primitives like Intel TDX and NVIDIA CC, available on the latest generations of GPUs. Secure hardware like this is a building block for verifiably private computation without having to trust the operator. While Confer hasn't released the technical details yet, you can see in the web inspector that they use TDX on the backend by examining the attestation logs. This is a similar architecture to what we've been developing at Tinfoil (https://tinfoil.sh) if you're curious to learn more!


reminds me of a story called "A Disneyland Without Children" about a planet overtaken by AI pursuing meaningless "inbred" GDP goals while completely neglecting the humans in the process https://open.substack.com/pub/nosetgauge/p/a-disneyland-with...

