FWIW, if you want frontier-level performance as of a few months back, DeepSeek v4 and K2.6 are there. Almost zero chance you can run them locally, but you do have a choice of providers.
Qwen-coder-next is considered SOTA for things you could actually run locally.
Associating dark matter with epicycles is unfair, but the risk is real (CDM, WDM, SIDM, and probably more by next year). In that light, of course it accounts for more observations (by virtue of spawning branches to explain incompatible observations) - but the counterpoint is that it has made some predictions.
My stance is that anyone pointing at it in either light probably isn't taking everything into account. It's an incredibly immature theory space - are we going to get 20 more branches of it (making it modern epicycles), or are we going to see one of the current branches pay off?
You only allocate on boxed futures, which are much rarer than naked futures - generally only used where object safety (essentially dyn support) is required. Even then, some workarounds exist.
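A minimal sketch of where the allocation actually happens (assumes Rust 1.85+ for Waker::noop; all names here are illustrative):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll, Waker};

    // A plain async fn returns an anonymous `impl Future`: no heap
    // allocation, the state machine lives wherever the caller puts it.
    async fn fetch_len(data: &'static [u8]) -> usize {
        data.len()
    }

    // Boxing is only needed to erase the concrete type, e.g. to store
    // differently-shaped futures behind one dyn interface (object safety).
    type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;

    fn pick(flag: bool) -> BoxFuture<usize> {
        if flag {
            Box::pin(fetch_len(b"hello")) // the heap allocation happens here
        } else {
            Box::pin(async { 42usize })   // and here
        }
    }

    fn main() {
        // These futures are immediately ready, so a single poll suffices.
        let mut fut = pick(true);
        let mut cx = Context::from_waker(Waker::noop());
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(n) => println!("len = {n}"),
            Poll::Pending => unreachable!(),
        }
    }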
Tokio is a general-purpose async runtime. Much the same could probably be said for async-std (except IIRC they do have a barebones reactor for you to build your own on). In general, a general-purpose async runtime will do worse at highly specific tasks than a purpose-built one (especially for e.g. NUMA-aware workloads).
I think avoiding async entirely might be a mistake, and I'm not entirely convinced anything better than a general-purpose async runtime exists for a JS runtime (it is itself general-purpose, after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
my guess is they want to do all I/O as part of their event loop explicitly, and blocking a thread in a syscall waiting for an IOP (à la std::fs) isn't the vibe.
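Concretely, the two models look like this - a sketch assuming tokio as the runtime (tokio::fs::read is essentially std::fs::read shipped off to a blocking pool internally):

    // std::fs::read parks the calling OS thread inside the read(2)
    // syscall until the data arrives:
    fn sync_read(path: &str) -> std::io::Result<Vec<u8>> {
        std::fs::read(path) // the whole thread blocks here
    }

    // An event-loop runtime keeps its workers polling and pushes the
    // blocking call onto a dedicated pool instead:
    #[tokio::main] // assumes the tokio crate with the "full" feature set
    async fn main() -> std::io::Result<()> {
        let bytes = tokio::fs::read("/etc/hostname").await?; // spawn_blocking under the hood
        println!("read {} bytes without stalling the event loop", bytes.len());
        let _ = sync_read("/etc/hostname")?; // same result, but ties up a thread
        Ok(())
    }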
Grok was supposed to be the uncensored frontier model. I'm not sure if that's been worked around since, but censorship was making models less intelligent at least a few years ago.
I worked at the industry's first commercial vulnerability lab (Secure Networks) in the mid-90s, and many of my friends at the time founded X-Force. Commercial vulnerability research has always been about marketing: marketing pays for the vulnerability research. That doesn't make it any less prosocial.
hope you are also blacklisting google's project zero, and practically every other major player in the vulnerability reporting space, as they all use roughly the same bog-standard 90+30 policy.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all. Be glad a patch was available prior to release.
If they want to be seen as responsible rather than opportunistic, then yeah, they should do a proper coordinated disclosure.
Sure, they have no legal obligation to disclose, but we also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstreams about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do it than a scheme of crossing your fingers and hoping every reporter contacts every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
how many times does it have to be said that it is impossible for the linux kernel team to communicate with anything but a minuscule portion of its downstreams, and _that_ has been done?
Who cares about how you are seen when you are selling 0day for big bucks? The bad actor makes more money than the 'legitimate' one without breaking any law. Punishing someone who didn't alert distros despite a patch being available encourages the company to simply find flaws and sell them for profit - it pays more to begin with.
If they want to take advantage of disclosure for marketing, they're either going to need to accept the norms around responsible disclosure, or they're going to need to accept how shirking those norms will come off. That's life in society. Sometimes it's annoying and sometimes it doesn't feel rational, but these norms have been negotiated throughout the history of our industry and are the way they are for reasons good and bad.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
It is absolutely not true that viable commercial vulnerability labs need to "accept the norms around responsible disclosure". There are no such norms. "Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules. It was fantastically successful at that, and it's worth pushing back on at every opportunity.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
Microsoft's policy is: "if you contact us with a vulnerability, you automatically agree to the terms of our responsible disclosure policy", which includes waiting 30 days after a patch was created, and says nothing about how long that process takes.
There is actually no way to give them a friendly heads-up and then do your own thing. The only way not to be bound is by not sending them any notification at all...
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
You said "There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all..."
Maybe you're right. I just find it confusing. The language is all-encompassing and doesn't read as opt-in to me if taken literally: "By submitting any vulnerabilities to Microsoft". And I found no other pages describing "report in such-and-such way to have these terms apply instead". But I always have problems with this stuff, perhaps taking it too seriously.
Obviously they can write whatever they want in their policy documents. The thing is, sometimes this is about larger sums of money, or someone's reputation, and it may or may not actually lead to legal steps. In contrast with whatever TOS/EULA you click through when signing up for some service, this feels more serious. I've seen some people get harried after publishing something that fell _outside_ the servicing boundaries. Getting tangled up in any of that is already a loss in my book, even if you "win" in the end.
Note that that policy is also where they set out the safe-harbor conditions, which, on my reading, are tied to the bounty policy and not the RD/CVD policy. The RD/CVD page itself specifies no such thing, which is why I connect the two.
I do not speak for MSFT, but last time I spoke with MSRC indeed they would be happy to receive your vulnerability report even if you did not wish to participate in any particular bug bounty program.
Those norms do not exist. That is just people asking companies to do stuff, for free, that benefits the person complaining - and many companies will not do that.
It seems to me you're unaware of them, but there are strong norms around disclosure. They've been discussed for decades. It is the expectation that vendors would be notified in a scenario like this.
No, there are users who want those to be norms. Qualified researchers happily sell substantive vulns to people who pay enough (governments, Cellebrite, and companies like that) to quell any complaint.
Which is again, irrelevant to the question of how disclosure works and what expectations there are around it because that is not disclosure and is not what was being discussed.
How does someone being incentivized to sell a vulnerability to a private organization over disclosing it publicly preserve a "high trust society"? Do you mean in the context of a "deceptively high-trust society"?
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
Unfortunately this is correct. As a security researcher, I've set millions in profit on fire by reporting vulns to projects that offer no bounties instead of selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Harder and harder to get good policy like what you describe when tech-adjacent people loudly argue for criminal penalties for anything other than coordinated disclosure :(
Are you claiming that if I sell 0day through a broker to the national Government of a given jurisdiction, the national Government of that jurisdiction is going to criminally penalize me?
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
It is categorically false that there's a legal obligation not to sell vulnerabilities. There's an obligation not to knowingly sell them directly to ongoing criminal enterprises. That's it. Plenty of people make fuckloads of money selling vulnerabilities for exploitation rather than repair.
(The buyers are the NSA, the IDF, Cellebrite, NSO and its successor corporation and that kind of thing. Depends on what you are offering)
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.
Uh... no? If you mean legally, some might be, depending on jurisdiction. But ethically? Yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally, I can be glad that the Linux community is strong enough to respond quickly while also being angry that this person behaved unethically.
Likewise, when people in my industry behave poorly or unethically, I'm the person ethically obligated to both point it out and condemn it. Not to become an apologist demanding that I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all.
I'm so glad these so-called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
the way they disclosed it is the industry standard. think of the biggest security research teams you know (e.g. google), and they follow the same process.
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
To be clear, the vulnerability existed in Linux, not in Xint Code. It existed whether this group disclosed it or not. Knowledge of it, and exploits, may have already been bought and sold among various groups with various motives including crime, terrorism, or cyberwarfare, who likely made good money off it if so.
In that world, the vulnerability has more value to those who seek to exploit it for their own motives, regardless of the consequences. They hope that no one else stumbles on it and fixes it, preventing them from continuing to use it to do bad things.
In the world where it is disclosed, there is more value in fixing the vulnerability as the maintainer’s reputation is at risk (and potentially monetary loss or legal liability if they are shown to be negligent).
There is no such thing as "the responsible disclosure protocol". There's really no such thing as "responsible disclosure" at all, but "the responsible disclosure protocol" is a term I have literally never heard before. (I've been a vulnerability researcher since the mid-1990s, for what it's worth.)
> In computer security, coordinated vulnerability disclosure (CVD, sometimes known as responsible disclosure)
I guess you can learn something new after 36 years.
If you are referring to what you quoted, your pedantry and sharpshooting would result in an incomplete English sentence: "that's why we have the responsible disclosure" is missing a noun. Now that we are firmly in worthless pedantry:
Protocol (n):
1.a. a system of rules that explain the correct conduct and procedures to be followed in formal situations
1.b. a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
If you don't like what I said or disagree, poke holes in factual inaccuracies. However, in the reality that I am pretty sure we all share, responsible disclosure is a well established protocol that is followed by many security researchers, and was imperfectly followed here.
These researchers found a vulnerability in the Linux kernel. They could have just written a blog post and put it online, or not told anybody, or sold it. But instead they decided to tell the Linux kernel devs, and give them time to act before publishing.
And your beef is that you’ve decided they needed to also inform individual downstream projects that use the Linux kernel? Why? Which ones?
I'm all for lighting a fire under the developer's ass, but we live in an imperfect world, and the biggest problem we have is end users. We may have applied the mitigation on day 0 and updated as soon as the fixed kernel landed in our distro - and if some of us didn't, then even savvy users fall into that "don't update fast enough" group (which is fine, which is human, but is said imperfection).
Major distros should at least have gotten a few days of notice for something this catastrophic. It doesn't help that the kernel is fixed if "normies" aren't able to access it on day 0. For reference, the standard is 30 for the developer to fix and 90 for it to land on machines. Even 30+7 would have been a substantial improvement.
Ethical security research involves ethics, and maybe they aren't referenced in university/college any more - but here's what I was taught: https://www.acm.org/code-of-ethics .
> 1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
> [...] Computing professionals should consider whether the results of their efforts will [...] and will be broadly accessible.
> 1.2 Avoid harm.
> (Honestly, all of it)
> 2.3 Know and respect existing rules pertaining to professional work.
> 3.1 Ensure that the public good is the central concern during all professional computing work.
> People—including users, customers, colleagues, and others affected directly or indirectly—should always be the central concern in computing.
Maybe other codes of ethics for CS exist; I'd like to know which ethics these ethical researchers were following.
It’s a commonly followed practice for some people. Notably it’s what was done here: they coordinated disclosure with the Linux kernel devs. And now folks are angry that they didn’t also coordinate with yet more downstream projects.
> For reference, the standard is 30 for the developer to fix and 90 for it to land on machines
no, the standard is 90 days from notification, or 30 days from the patch date if the patch lands inside that 90-day window.
e.g.
> If a vendor patches a security issue 47 days after Project Zero notified
> the vendor about the vulnerability, details would be made public on day 77.
> If a vendor patches a security issue 83 days after Project Zero notified
> the vendor about the vulnerability, details would be made public on day 113.
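the same arithmetic as a quick sketch (assuming Project Zero's published 90+30 policy; the helper name is made up):

    // Day (counted from notification) on which details go public under
    // 90+30: a patch inside the 90-day window gets a 30-day grace period;
    // no patch in time means disclosure at day 90 regardless.
    fn disclosure_day(patch_day: Option<u32>) -> u32 {
        match patch_day {
            Some(d) if d <= 90 => d + 30,
            _ => 90,
        }
    }

    fn main() {
        assert_eq!(disclosure_day(Some(47)), 77);  // first quoted example
        assert_eq!(disclosure_day(Some(83)), 113); // second quoted example
        assert_eq!(disclosure_day(None), 90);      // unpatched: drop at day 90
    }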
please also note that you are blindly quoting wikipedia articles at people who either currently work in security research or used to. while we are not infallible, you should perhaps consider that we at least have real-life experience dealing with vulnerability disclosure processes, and aren't just learning about them today from wikipedia. when a room full of experienced professionals is telling you that you are misunderstanding something, that is a sign to step back for a second and maybe reconsider your position.
There isn’t such a thing. Coordinated disclosure (sometimes called responsible disclosure by people who want to inject their morals into one available option so as to paint the others as irresponsible) exists. As has been noted, some large groups like Project Zero use 90+30, but that isn’t a set protocol; it’s a thing some folks picked and others have copied. If a research group announced tomorrow that they were doing a flat 42 days from notification to release, they would still be doing coordinated disclosure.
haha, for the record, the "used to" was primarily referring to myself, who now teaches the next generation instead of practicing! you are probably much more active in the space than i am nowadays
You are strongly implying that keeping the vulnerability secret is following of what you quoted. But that’s the rub. Many of us think the opposite. Not disclosing this would have been the violation.
> You are strongly implying that keeping the vulnerability secret is following of what you quoted.
Please don't put words in my mouth when I have clearly stated the contrary. I used the word "disclosure"; that is very different from keeping things secret.
You're trying to extrapolate on this specific scenario from Wikipedia pages. Have you done any of this work? What have you done when you've reported a vulnerability to an upstream with dozens of downstreams? When your teammates have? You keep talking about "protocols" and "commonly followed practice" and "codes of ethics". Tell us more about the codes, protocols, and practices in your shop.
Nobody, for what it's worth, is arguing that major distros shouldn't have gotten some kind of notice. The problem is that the entity responsible for doing that isn't the vulnerability research lab. In fact, as a general procedural point, researchers can't go contact downstreams. They might be able to do so in the specific case of Linux, but you've tried to spin that possibility into a binding obligation derived from established practices, which: no. That's not a real thing.
I never said "binding obligation," that is the first time "binding" has appeared in this discussion and was introduced by you. Once again claiming things I have never said. Doing what you are free to do can still be a shitty thing to do.
AI massively empowers people who are incapable of anything except bikeshedding. It is itself very likely to be a bikeshed (though there are legitimate uses), and it also gives them the power to drone on until they overpower any opposition to their useless ideas.
Everything is increasingly expected to gain bikesheds.
>> people who are incapable of anything except bikeshedding
The amount of insulting language directed at people who actually have an open mind about AI and AI tooling is frustrating. Can you all just please address the merits of the topic of the post instead of making every AI-related post on HN an excuse to vent about your own particular worldview and insult people who don't necessarily agree?
Platform support for AI has as much place in a browser as it does in Notepad. This isn't about being open-minded at all. I have written multiple MCPs and I use AI daily; I am not in the "don't have an open mind" crowd. This outright non-feature is a significant source of issues, not least of which is fingerprinting.
Make an AI browser extension. Done.
Shoving AI into anything where it can go is not having an open mind about things; it's nothing more than shoving AI into anything where it can go.
Conversely, can you provide a single reason why this API should exist which isn't something that obviously erupted from an LLM? Again:
> Browsers and operating systems are increasingly expected to gain access to language models.
God help people if they have to copy their prompt from ChatGPT to Claude.
Only the weights and the RNG used to select tokens can answer that. You will understand much if you read up on the quality of the code in the CC source leak; it's completely vibe-coded, and the printf fn is genuinely impossible for a human to comprehend.
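For flavor, this is roughly what "the RNG used to select tokens" means - a toy temperature-sampling sketch (everything here is illustrative, including the hand-rolled LCG, so it needs no external crates):

    // Sample a token id from the softmax of the model's logits.
    // Real samplers add top-k/top-p and more, but the core is this.
    struct Lcg(u64);
    impl Lcg {
        fn next_f64(&mut self) -> f64 {
            // Knuth's MMIX constants; returns a value in [0, 1).
            self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            (self.0 >> 11) as f64 / (1u64 << 53) as f64
        }
    }

    fn sample_token(logits: &[f64], temperature: f64, rng: &mut Lcg) -> usize {
        // softmax with temperature (max subtracted for numerical stability)
        let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        let exps: Vec<f64> = logits.iter().map(|l| ((l - max) / temperature).exp()).collect();
        let total: f64 = exps.iter().sum();
        // inverse-CDF draw: walk the cumulative distribution
        let mut r = rng.next_f64() * total;
        for (i, e) in exps.iter().enumerate() {
            r -= e;
            if r <= 0.0 { return i; }
        }
        exps.len() - 1
    }

    fn main() {
        let mut rng = Lcg(42);
        let logits = [2.0, 1.0, 0.1]; // pretend model output over a 3-token vocab
        let picks: Vec<usize> = (0..5).map(|_| sample_token(&logits, 1.0, &mut rng)).collect();
        println!("sampled token ids: {picks:?}"); // same seed + weights => same output
    }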
Ligatures are a renderer issue, so using alacritty as a lib wouldn't have this problem (though it does demonstrate their hardline stance). Another example that would translate is how long it took them to support disambiguation of key combinations: https://github.com/alacritty/alacritty/issues/6378 (2019-2023). Of course, the maintainers are free to do whatever they want with the project - but such things do make alacritty-as-a-lib an exceptionally bad choice for situations where you want things to just work.