
What signing?

Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?

That’s already possible, nothing prevents you from doing it.

They are detecting it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.

At least that’s how things worked last week xD


I'm referring to this signing bit:

https://alex000kim.com/posts/2026-03-31-claude-code-source-l...

Ah, it seems that Bun itself signs the code. I don't see how this couldn't be spoofed, though.


Ah yes, the API will accept requests that don't include the client attestation (or the fingerprint from src/utils/fingerprint.ts). At least it did a couple of weeks back.

They are most likely using these as after-the-fact indicators, with automation that kicks in once a threshold is reached.

Now that the indicators have leaked, they will most likely be rotated.


> Now that the indicators have leaked, they will most likely be rotated.

They can't really do that. After a rotation they'd have no way to distinguish "this is a user of a non-updated Claude Code" from "this is a user of a Claude Code proxy".


Dropped you a mail from mads.havmand@nansen.ai


[flagged]


> If you're a security expert and want to help, email me ...

And

> Dropped you a mail from [email]

I don't think there is any indication of a compromise, they are just offering help.


Hi all, Ishaan here, one of the LiteLLM maintainers.

The compromised PyPI packages were litellm==1.82.7 and litellm==1.82.8. Those packages have now been removed from PyPI.

We have confirmed that the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow. All maintainer accounts have been rotated. The new maintainer accounts are @krrish-berri-2 and @ishaan-berri.

Customers running the official LiteLLM Proxy Docker image were not impacted. That deployment path pins dependencies in requirements.txt and does not rely on the compromised PyPI packages.

We are pausing new LiteLLM releases until we complete a broader supply-chain review and confirm the release path is safe.

From a customer exposure standpoint, the key distinction is deployment path. Customers running the standard LiteLLM Proxy Docker deployment path were not impacted by the compromised PyPI packages.

The primary risk is to any environment that installed the LiteLLM Python package directly from PyPI during the affected window, particularly versions 1.82.7 or 1.82.8. Any customer with an internal workflow that performs a direct or unpinned pip install litellm should review that path immediately.
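For anyone auditing an environment by hand, a minimal sketch of the version check might look like this. The affected version pair comes from the incident report above; the helper names here are my own, not part of any official LiteLLM tooling.

```python
# Minimal sketch: check whether the locally installed litellm version is one
# of the compromised releases named in the incident report above.
from importlib import metadata

AFFECTED = {"1.82.7", "1.82.8"}  # versions named in the advisory

def is_affected(version: str) -> bool:
    """True if the given version string is a known-compromised release."""
    return version in AFFECTED

def check_installed() -> str:
    """Report on the litellm version installed in the current environment."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm is not installed in this environment"
    if is_affected(installed):
        return f"WARNING: litellm {installed} is a known-compromised release"
    return f"litellm {installed} is not in the known-affected set"
```

Running `check_installed()` inside each virtualenv (or CI image) that may have done an unpinned `pip install litellm` during the affected window gives a quick first pass; it does not replace a proper review of lockfiles and build logs.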

We are actively investigating full scope and blast radius. Our immediate next steps include:

- reviewing all BerriAI repositories for impact
- scanning CircleCI builds to understand blast radius and mitigate it
- hardening release and publishing controls, including maintainership and credential governance
- strengthening our incident communication process for enterprise customers

We have also engaged Google’s Mandiant security team and are actively working with them on the investigation and remediation.


We were not. I reached out to the team at BerriAI to offer my assistance as a security professional, given that they requested help from security experts.


Very interesting!

I’ve got an internal tool that we use. It doesn’t have a deterministic classifier; it purely offloads to an LLM. Certain models achieve 100% coverage on adversarial input, which is very cool.

I’m gonna have a look at that deterministic engine of yours; that could potentially speed things up!


Cool - which models are you seeing 100% on adversarial input? I'd love to see the benchmark if you published it somewhere. In my recent sessions while building nah, the deterministic layer handled about 95% of inputs with zero latency/tokens: over 13.5k tool calls across 1.5 days of coding, 84% allowed, 12% asked, 5% blocked. All decisions are logged to ~/.config/nah/nah.log, so you can audit its efficiency.
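The deterministic-first approach described above can be sketched roughly like this: a cheap rule layer decides allow/block for the common cases and only escalates the rest ("ask") to the LLM. The rules below are purely illustrative examples, not nah's actual rule set.

```python
import re

# Hypothetical rule set, purely illustrative - not nah's actual rules.
ALLOW = [r"^(ls|cat|grep|git status)\b"]          # known-safe commands
BLOCK = [r"\brm\s+-rf\s+/", r"\bcurl\b.*\|\s*sh\b"]  # known-dangerous patterns

def classify(command: str) -> str:
    """Deterministic first pass: 'allow', 'block', or 'ask' (LLM fallback)."""
    for pattern in BLOCK:
        if re.search(pattern, command):
            return "block"
    for pattern in ALLOW:
        if re.match(pattern, command):
            return "allow"
    # Anything the rules don't cover would be escalated to the LLM here.
    return "ask"
```

The appeal is that the common ~95% of tool calls never touch the model at all, so they cost zero tokens and zero latency; only the ambiguous remainder pays the LLM round-trip.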


Well, the head of reliability did leave a month or two ago zD


Last I heard of it this was proposed as a directive as opposed to regulation, meaning every single member state would have to interpret it and create their own national implementation. Just like with GDPR.

So 27 individual implementations of this, as opposed to the current 27 different implementations of how to incorporate and assign equity?

Seems… silly?

I’m all for making it more attractive to create startups in the EU… But I don’t think a directive is the right way


This is cool - whenever I have a new idea for a thing, I spend too much time writing boilerplate IAM and backend stuff, taking away time that could be spent on actual business logic. I've thought about packaging the boilerplate stuff up before, but never got around to it. Glad you did!

A thing to consider would be to make it easier (or perhaps bake it in) to separate parts of the app into a separate origin. Something that would be good for pretty much any SaaS app is separating the IAM out (you could still embed it with an iframe) - this lets you keep a fairly tight security policy for the IAM stuff and a more lax one for the rest of the app. Kinda like how Google separates out accounts.google.com.


Thanks! That's exactly why I open-sourced it. Instead of this living in my private repo getting occasional updates, now the community can push it forward. Improvements flow back to everyone, including me. Win-win.

Your IAM separation idea is interesting. Separate origin for auth would tighten the CSP significantly. The backend is already modular, so spinning the auth service into its own container with a stricter policy is doable. Worth exploring. Would you mind opening an issue on the repo so I don't lose track of this?
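To make the "stricter policy per origin" idea concrete, here is a hypothetical sketch of how the two origins could get different Content-Security-Policy values. The hostnames and policy strings are illustrative assumptions, not this project's actual configuration.

```python
# Illustrative per-origin CSP values - hypothetical, not this project's config.
# The auth origin gets a locked-down policy; the main app origin stays laxer.
CSP_BY_ORIGIN = {
    "auth.example.com": (
        "default-src 'none'; "
        "script-src 'self'; "
        "style-src 'self'; "
        "form-action 'self'; "
        # frame-ancestors allows the iframe embedding mentioned above,
        # but only from the main app origin.
        "frame-ancestors https://app.example.com"
    ),
    "app.example.com": (
        "default-src 'self'; "
        "img-src 'self' data:; "
        "script-src 'self' 'unsafe-inline'"
    ),
}

def csp_for(host: str) -> str:
    """Return the Content-Security-Policy header value for a given host."""
    return CSP_BY_ORIGIN.get(host, "default-src 'self'")
```

A middleware would set the `Content-Security-Policy` response header from `csp_for(request.host)`; the point is just that splitting origins lets the auth surface run under `default-src 'none'` without breaking the rest of the app.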


This looks cool, but I’m sad you’ve chosen a name that’s already associated with another security tool :(


Looking at a possible rebrand in the near future haha.


Now? Chainalysis has always worked for governments…

It was basically spawned out of the government needing help with investigating crypto - I think it was Mt. Gox…


Exactly. “Tracers in the Dark” (https://a.co/d/aos3Nka) does a good job of telling that story and a couple of others from the early days of blockchain analytics


Shameless plug: I pushed a small CLI the other day for detecting unpinned dependencies and automatically fixing them: https://codeberg.org/madsrc/gh-action-pin

Works great with commit hooks :P

Also working on a feature to recursively scan remote dependencies for lack of pins, although that doesn’t allow for fixing, only detection.

Very much alpha, but it works.
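For context, "pinning" a GitHub Actions dependency means replacing a mutable tag reference with a full commit SHA. A sketch of the before/after (the hash below is a placeholder, not a real commit):

```yaml
# Tag references are mutable: whoever controls the action repo can repoint
# them. Pinning to a full commit SHA (placeholder hash shown) freezes the
# exact code that runs; the trailing comment keeps the tag human-readable.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # - uses: actions/checkout@v4   # unpinned (mutable tag)
      - uses: actions/checkout@0000000000000000000000000000000000000000 # v4
```

This is exactly the class of fix that matters for supply-chain incidents like the one discussed above: a pinned SHA can't silently change underneath you the way a tag can.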


This looks great!

With the self-host option, it’s not really clear from the docs whether one can override the base URL of the different model providers.

I’m running my own OpenAI, Anthropic, Vertex and Bedrock compatible API, can I have it use that instead?


Thanks!

Yes, you can add 'custom models' and set the base url. More on this here: https://docs.plandex.ai/models/model-settings

