My only caveat would be that in some security fixes, the pure code delta is not always indicative of the full exploit method. But LLMs could interpolate from there depending on context.
> i presume they wont let you “manage all your AI spend in one place” for free.
Of course they will. In return they get to control who they're routing requests to. I wouldn't be surprised if this turns into the LLM equivalent of "paying for order flow".
shivers? as in it frightens you? i believe there is no way around tokens being priced like gasoline at the gas station - it changes every hour. Any other system means you are either over- or underspending.
They can route between models, but you pay the standard rate for whichever model is selected (plus a 5% fee). Afaik all current model providers have fixed prices per token, which don't vary depending on, say, demand or hardware availability.
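To make the pricing model concrete, here's a minimal sketch of what a routed request costs under that scheme: the provider's fixed per-token rates plus a flat 5% routing fee. The prices below are made-up placeholders, not any real provider's rates.

```python
# Hypothetical cost of one routed request: fixed per-token provider
# prices, with the router's 5% fee applied on top. All numbers are
# illustrative placeholders.
def request_cost(tokens_in, tokens_out, price_in, price_out, fee=0.05):
    base = tokens_in * price_in + tokens_out * price_out
    return base * (1 + fee)

# e.g. 1000 input + 500 output tokens at made-up per-token prices
cost = request_cost(1000, 500, price_in=0.00001, price_out=0.00003)
```

The point is that the fee is a fixed markup on a fixed rate; nothing in the formula floats with demand.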
His point is that you don't need a working domain name since the MCP can just hardcode the IPs of the servers or resolve them through any other method that isn't DNS.
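A minimal sketch of that idea: a client-side resolver that tries DNS first but falls back to a list of IPs baked into the client, so a seized or blocked domain doesn't break connectivity. The hostname and IPs below are placeholders (TEST-NET addresses), not real infrastructure.

```python
# Sketch: resolve a service endpoint without depending on DNS.
# The hardcoded IPs are placeholder TEST-NET addresses for illustration.
import socket

HARDCODED_IPS = ["203.0.113.10", "203.0.113.11"]

def resolve_endpoint(hostname, use_dns=True):
    """Return candidate IPs: DNS result if available, else the baked-in list."""
    if use_dns:
        try:
            return [socket.gethostbyname(hostname)]
        except socket.gaierror:
            pass  # DNS blocked, poisoned, or domain seized: fall through
    return list(HARDCODED_IPS)
```

Any other out-of-band channel (a signed file fetched over HTTPS-by-IP, a DHT, etc.) could fill the same fallback slot.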
Would be fairly easy for them to offer an onion service on which they publish the current list of domains, as one option among the many ways of distributing small strings on the internet in an uncensorable way.
Ideally it is common knowledge that the onion service exists, and then people can go look at the onion service and update Wikipedia based on what they see there.
If you are really interested we could try piping their [API](https://encyclopediaapi.com/products/index) to some printable format. Maybe we can even find a quality print on demand service or bind it by hand :)
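As a rough sketch of what "piping the API to a printable format" could look like: fetch article JSON and render each entry as plain text with a title rule, ready for a print pipeline. The endpoint path and JSON field names here are assumptions for illustration, not the documented API.

```python
# Sketch: turn article JSON into a printable plain-text page.
# The JSON shape ({"title": ..., "body": ...}) is an assumed example,
# not the real API schema.
import json
import urllib.request

def article_to_printable(article):
    """Render one article dict as plain text with an underlined title."""
    title = article["title"]
    rule = "=" * len(title)
    return f"{title}\n{rule}\n\n{article['body']}\n"

def fetch_articles(url):
    """Fetch a JSON list of articles (network call, untested here)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

From there, plain text converts easily to PDF via any typesetting tool for the print-on-demand step.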
Different burden of proof. Why waste years trying to get server logs that may not exist when you can get a quick win? It's not about the money anyway. It's about the PR and whatever justification they can derive along the way.
Exactly, and that is insecure here because the app relayed the message beyond its own layer and ownership, so the app is no longer the endpoint of the communication.
And while I do think code signing alone would've helped in the recent issues, what I'd like to see is a sort of automated package scanner that searches for this kind of malware and then publishes a signed report enumerating the things verified alongside the package's PyPI metadata.
Then I could verify both the package and the scanner's result and decide whether to update.
i know this is daydreaming because who would sponsor scanning and attesting every open source project, anthropic?
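The verification flow described above could be sketched like this: check that the report really came from the scanner, then check that the report covers exactly the artifact you downloaded. A real deployment would use asymmetric signatures (e.g. Ed25519) so anyone can verify with a public key; the HMAC here is a simplified stdlib-only stand-in, and all names are hypothetical.

```python
# Toy sketch of verifying a package against a scanner's signed report.
# HMAC stands in for a real asymmetric signature scheme; in practice
# the scanner would sign with a private key and users verify with the
# public key.
import hashlib
import hmac
import json

def verify(package_bytes, report_json, report_mac, scanner_key):
    # 1. Is the report authentic? (stand-in for signature verification)
    expected = hmac.new(scanner_key, report_json, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report_mac):
        return False
    # 2. Does the report cover exactly this downloaded artifact?
    report = json.loads(report_json)
    return report["sha256"] == hashlib.sha256(package_bytes).hexdigest()
```

If either check fails — forged report or tampered package — the install should be refused.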