You should not install Claude Desktop or Claude Code unless you trust Anthropic. You either trust them to be a responsible custodian of your compute environment or you don't.
I mean, it almost doesn't matter what is installed at any given time: the agent is going to install stuff you can't realistically observe, the software will auto-update, and there is simply no way to be sure spyware won't end up on your computer.
Having faith in a for-profit organization to do the right thing, given access to your computer and everything you do on it, may be asking a bit too much.
It was always quite a simple thing to do: “disclosure”. Explain to me, in plain English, what you are going to do when I install your software. Don't bury it in a 40-page EULA with multiple amendments covering different aspects that affect me, which I would probably need a lawyer (or your very service) to understand, and which is of course subject to change whenever they feel like it.
It’s 2026 and they still won’t do it: even Apple stopped including the little summary at the top of the “Accept the New Terms” screen where they explained, in plain English, what the changes were.
And every time they change the terms, it is always in their favor: you code and eat pizza; they have a $1,000-an-hour team of lawyers ironing the hell out of the legal terms you must accept to use their services.
I am not telling you what to do, I am saying that Claude Code and Claude Desktop are not "normal" pieces of software that you can install once and choose to upgrade or not. It's a semi-alive agentic daemon. This is not something you can firewall and upgrade once a quarter after reviewing the changelog.
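To make "stuff you can't realistically observe" concrete: about the best you can do is snapshot the filesystem before and after a session and diff. A minimal sketch using only POSIX tools; the directories listed are assumptions, so point them at wherever your agent actually keeps state and installs dependencies:

```shell
# Hedged sketch: audit what an agentic tool touched during a session.
# AUDIT_DIRS is an assumption -- adjust to your agent's actual paths.
AUDIT_DIRS="$HOME/.claude $HOME/.npm $HOME/.local/bin"

snapshot() {
  # Record a POSIX cksum line (checksum, size, path) for every file,
  # so added, removed, and modified files all show up in a diff.
  for d in $AUDIT_DIRS; do
    [ -d "$d" ] && find "$d" -type f -exec cksum {} \;
  done | sort > "$1"
}

snapshot /tmp/agent_before.txt
# ... run the agent session here ...
snapshot /tmp/agent_after.txt

# Anything the session added, removed, or changed:
diff /tmp/agent_before.txt /tmp/agent_after.txt || true
```

Of course, this only catches changes inside the directories you thought to watch, and says nothing about network traffic or the next auto-update, which is rather the point of the comment above.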
Why are you taking what is clearly a legal problem and making it about the technology? The law could simply grant attorney-client privilege to chatbots. Nobody is arguing the advice was bad or more expensive than a real lawyer.
Chatbots are not people. They are computer programs. And there's no other realm I can think of where merely interfacing with a computer program breaks attorney-client privilege.
It is equivalent to saying an email to your lawyer breaks privilege because you communicated with Gmail. And it gets turbofucked when you consider that a program may be sending your information to an LLM. Would this same judge rule that having Copilot installed in Outlook also breaks privilege because they "chatted with an outside party" while drafting an email (even if they didn't intend to send it to Copilot)?
I can't think of a reason this isn't about the technology.
I do think we generalize too much, and "merely interfacing with a computer program" is too big a generalization. I could imagine that a video chat with your lawyer is protected because the very definition includes client/lawyer communication, i.e., a "video chat with your lawyer" cannot happen without both you and your lawyer, yet there they are, both interfacing with a video-chat program.
A chat with a chatbot never needs a lawyer to be a "chat with a chatbot" lol
The obligations placed on lawyers with regards to misrepresentation are a kind of check on the power of attorney-client privilege which would generally not exist for chatbots, so it's not obvious that this would be a good idea.
Feature delivery rate by Anthropic is basically a fast takeoff in miniature. Pushing out multiple features each week that used to take enterprises quarters to deliver.
Hard to want to go all-in on the Anthropic ecosystem with how inconsistent the model output from their top tier has been recently. I pay $$$ for API-level Opus 4.6 to avoid any low-tier binning, throttling, or subversive "it's peak rn so we're gonna serve up Sonnet in place of Opus for the next few hours", but I still find that the quality has been really hit or miss lately.
The bell curve up and then back down has been so jarring that I am pivoting to fully diversifying my use of all models to ensure that no one org has me by the horns.
But the default 1M context window just rolled out a few weeks ago. If refreshing old sessions on 1M context windows is the problem, it's completely aligned with what Boris is saying.
That's a good viewpoint. Perhaps they're not being alarmist or trying to scare people, but are being honest about the capabilities.
Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.
So, regardless of whether you think it's great that Opus gives this info, we need better solutions than legal liability for US corporations. When the open models have the ability to do damage, there's nobody to sue, no data center obstruction that will work. That's just the reality we have to front-run.
This is only true for small companies that can infinitely scale within AWS without anyone noticing.
You are talking about software scaling patterns; Anthropic is running into hardware limitations because they are maxing out entire datacenters. That's not an architectural decision, it's a financial gamble to front-run tens of billions in capacity ahead of demand.
It doesn't really matter what they want. Chat interfaces are doing this from the opposite direction, pulling the data down and explaining it to you, it's not a big leap for LLMs to turn their markdown responses into a slightly richer experience you can browse natively.
The difference is that Airbnb customers used Airbnb because they thought hotel regulations were dumb and overbearing (or at least, they didn't care about the laws). Delve customers were literally trying to obey the law and Delve (allegedly) lied to them about it.