Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency, and the only reason they wouldn't use the better cryptography designed by Dr. Bernstein. /s
Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)
Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.
(I also think these days P-256 is fine, but that is a different argument.)
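To make the "ephemeral" point concrete, here's a minimal sketch of an ephemeral ML-KEM-768 exchange using Go 1.24's crypto/mlkem package (a sketch only; double-check names and return orders against the package docs). The decapsulation key exists for a single handshake and is then thrown away, so there is no long-lived secret for a timing attack to slowly extract:

    package main

    import (
        "bytes"
        "crypto/mlkem"
        "fmt"
    )

    func main() {
        // Receiver: generate a fresh decapsulation key, used for this handshake only.
        dk, err := mlkem.GenerateKey768()
        if err != nil {
            panic(err)
        }

        // Sender: parse the receiver's encapsulation (public) key and encapsulate to it.
        ek, err := mlkem.NewEncapsulationKey768(dk.EncapsulationKey().Bytes())
        if err != nil {
            panic(err)
        }
        senderShared, ciphertext := ek.Encapsulate()

        // Receiver: decapsulate the ciphertext, then discard dk.
        // Nothing long-lived is left for an attacker to target.
        receiverShared, err := dk.Decapsulate(ciphertext)
        if err != nil {
            panic(err)
        }
        fmt.Println("shared secrets match:", bytes.Equal(senderShared, receiverShared))
    }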
I genuinely do not understand how someone working in the capacity that you do, for things that matter universally for people, can contend that an organization who is intentionally engaging in NOBUS backdoors can be remotely trusted at all.
That is insanely irresponsible and genuinely concerning. I don't care if they have a magical ring that defies all laws of physics and assuredly prevents any adversary stealing the backdoor. If an organization is implementing _ANY_ backdoor, they are an adversary from a security perspective and their guidance should be treated as such.
The world just doesn’t work in such a binary way. Forming a mental model of an entity’s incentives, goals, capabilities, and dysfunctions will serve you much better than making two buckets for trusted parties and adversaries.
As you are someone building cryptographic libraries used by people all over the world, which includes those who might be seen as "enemies" by the organization in question, this is not a gradient — it's quite binary in nature.
Maybe your motives are benevolent, but you're arguing two things:
1) We can broadly trust the US government
2) We should adopt new encryption partly designed and funded by the US government, and get rid of the battle tested encryption that they seem not to be able to break
Forgive me for being somewhat suspicious of your motives here
> Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency
The NSA still has its secret Suite A algorithms for its most sensitive information. If it thinks those are better than the current public algorithms, and its goal is for telecommunications vendors to have better encryption, then why doesn't it publish them so telcos could use them?
> Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)
The NSA weakened DES against brute-force attack by reducing the key size (though it also made it stronger against differential cryptanalysis).
The thing that sets this effort apart from DES and Clipper is that the USG actually has skin in the game. Neither DES nor Clipper was ever intended or approved to protect classified information.
These are algorithms that NSA will use in real systems to protect information up to the TOP SECRET codeword level through programs such as CNSA 2.0[1] and CSfC.
> Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency, and the only reason they wouldn't use the better cryptography designed by Dr. Bernstein. /s
I guess the NSA thinks they're the only one that can target such a side channel, unlike, say, a foreign government, which doesn't have access to the US Internet backbone, doesn't have as good mathematicians or programmers (in NSA opinion), etc.
> Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.
Except for KyberSlash (I was surprised when I looked at the bug's code, it's written very optimistically wrt what the compiler would produce...)
So do you think vendors will write good code within the deadlines between now and... 2029? I wouldn't bet my state secrets on that...
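For anyone who hasn't looked at it: KyberSlash was a division whose dividend depends on secret data in the reference decapsulation path, which some compilers and targets lower to a variable-time division instruction. Here's a rough Go sketch of the pattern and of the usual division-free rewrite (multiply by a precomputed reciprocal and shift). The constants are illustrative, chosen so the two expressions agree for every in-range coefficient, not copied from any particular library, and the Go compiler would likely strength-reduce the constant division on its own anyway; the point is the shape of the C bug and its fix:

    package main

    import "fmt"

    const q = 3329 // the Kyber/ML-KEM modulus

    // naive mirrors the shape of the original reference code: a division whose
    // dividend depends on the secret coefficient t.
    func naive(t uint32) uint32 {
        return ((2*t + q/2) / q) & 1
    }

    // divFree computes the same bit without a division: multiply by a precomputed
    // reciprocal of q and shift. 80635 is roughly 2^28/3329, and the 1665 offset
    // keeps the rounding identical to naive over the whole range [0, q).
    func divFree(t uint32) uint32 {
        return ((2*t+1665)*80635>>28) & 1
    }

    func main() {
        // Sanity check: the two expressions agree for every valid coefficient.
        for t := uint32(0); t < q; t++ {
            if naive(t) != divFree(t) {
                fmt.Println("mismatch at", t)
                return
            }
        }
        fmt.Println("division-free rewrite matches for all coefficients in [0, q)")
    }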
That's a timing side-channel, irrelevant to ephemeral key exchanges, and tbh if that's the worst that went wrong in a year and a half, I am very hopeful indeed.
> The industry standard and general recommendation for quantum resistant symmetric encryption is using 256 bit keys
It simply is not. NIST and BSI specifically recommend all of AES-128, AES-192, and AES-256 in their post-quantum guidance. All of my industry peers I have discussed this with agree that AES-128 is fine for post-quantum security. It's a LinkedIn meme at best, and a harmful one at that.
My opinion changed on the timeline of CRQCs. There is no timeline in which CRQCs are theorized to become a threat to symmetric encryption.
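Rough numbers behind that, as a sanity check (standard back-of-the-envelope reasoning, not anything from this thread): Grover's algorithm turns brute force on a 128-bit keyspace into about sqrt(2^128) = 2^64 iterations, but those iterations are inherently sequential, and the attack parallelizes badly:

    sqrt(2^128) = 2^64 ~ 1.8 * 10^19 serial Grover iterations
    even at an absurdly optimistic 1 ns per iteration: 2^64 ns ~ 585 years
    across M machines: sqrt(2^128 / M) iterations each, i.e. only a sqrt(M) speedup

Each iteration is itself a full quantum evaluation of AES under error correction, so the real cost is far worse. That's the arithmetic behind treating AES-128 as fine and 256-bit keys as extra margin rather than a necessity.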
I don't think you said (or cited) what you think you said.
Leaving aside that you actually didn't cite a lattice attack paper, the "dual attack" on lattice cryptography is older than P-256 was when Curve25519 was adopted to replace it. It's a model attack, going all the way back to Regev. It is to MLKEM what algebraic attacks were (are?) to AES.
You know you're in trouble in these discussions when someone inevitably cites SIDH. SIDH has absolutely nothing to do with lattices; in fact, it has basically nothing to do with any other form of cryptography. It was a wildly novel approach that attracted lots of attention because it took a form that was pin-compatible with existing asymmetric encryption (unlike MLKEM, which provides only a KEM).
People who bring up SIDH in lattice discussions are counting on non-cryptography readers not to know that lattice cryptography is quite old and extremely well studied; it was a competitor to elliptic curves for the successor to RSA.
With that established: what exactly is the point you think those three links make in this discussion? What did you glean by reading those three papers?
He's obviously not saying that you can "trust blindly" any PQ algorithm out there, just that there are some that have appeared robust over many years of analysis.
He is assessing that the risk of a quantum computer breaking dlog cryptography is greater than the risk of post-quantum assumptions being broken, in particular for lattices.
One can always debate but we have seen more post quantum assumptions break during the last 15 years than we have seen concrete progress in practical quantum factorisation (I'm not talking about the theory).
It's purely a matter of _potential_ issues. The research on lattice-based crypto is still young compared to EC/RSA. Side channels, hardware bugs, unexpected research breakthroughs all can happen.
And there are no downsides to adding regular classical encryption. The resulting secret will be at least as secure as the _most_ secure algorithm.
The overhead of additional signatures and keys is also not that large compared to regular ML-KEM secrets.
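To make the "at least as secure as the strongest component" point concrete, here's a minimal Go sketch of a KEM combiner (illustrative only; TLS's X25519MLKEM768 feeds the two shared secrets straight into the handshake key schedule rather than a standalone HKDF like this, and error handling is elided). Recovering the output requires recovering both the ML-KEM and the ECDH shared secret:

    package main

    import (
        "crypto/ecdh"
        "crypto/mlkem"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/hkdf"
    )

    func main() {
        // Classical component: an ephemeral X25519 exchange
        // (P-256 via ecdh.P256() works the same way).
        alice, _ := ecdh.X25519().GenerateKey(rand.Reader)
        bob, _ := ecdh.X25519().GenerateKey(rand.Reader)
        ecdhSecret, _ := bob.ECDH(alice.PublicKey())

        // Post-quantum component: ML-KEM-768 encapsulation to an ephemeral key.
        dk, _ := mlkem.GenerateKey768()
        mlkemSecret, ct := dk.EncapsulationKey().Encapsulate()
        _ = ct // sent to the peer, who recovers mlkemSecret with dk.Decapsulate(ct)

        // Combiner: derive the session key from both secrets. An attacker must
        // break *both* components to learn the output, so the hybrid is at least
        // as strong as the stronger of the two.
        ikm := append(append([]byte{}, mlkemSecret...), ecdhSecret...)
        kdf := hkdf.New(sha256.New, ikm, nil, []byte("illustrative hybrid combiner"))
        sessionKey := make([]byte, 32)
        if _, err := io.ReadFull(kdf, sessionKey); err != nil {
            panic(err)
        }
        fmt.Printf("hybrid session key: %x\n", sessionKey)
    }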
No it's not. This is the wrong argument. It's telling how many people trying to make a big stink out of non-hybrid PQC don't even get what the real argument is.
Perhaps you would care to enlighten us ignorant plebs rather than taunting us?
My understanding (obviously as a non-expert) matches what cyberax wrote above. Is it not common wisdom that the pursuit of new and exciting crypto is an exercise filled with landmines? By that logic, rushing to switch to the new shiny would appear to be extremely unwise.
I appreciate the points made in the article that the PQ algorithms aren't as new as they once were and that if you accept this new imminent deadline then ironing out the specification details for hybrid schemes might present the bigger downside between the two options.
I mean TBH I don't really get it. It seems like we (as a society or species or whatever) ought to be able to trivially toss a standard out the door that's just two other standards glued together. Do we really need a combinatoric explosion here? Shouldn't 1 (or maybe 2) concrete algorithm pairings be enough? But if the evidence at this point is to the contrary of our ability to do that then I get it. Sometimes our systems just aren't all that functional and we have to make the best of it.
"taunt" in the sense that you dangle some knowledge in front of people and make them beg, not "taunt" in the sense of "insult".
You said:
>"[...] don't even get what the real argument is."
and then refuse to explain what the "real" argument is. someone then asks for clarification and you say:
"It's definitely not [...]""
okay, cool! you are still refusing to explain what the "real" argument is. but at least we know one thing it isn't, i guess.
you haven't even addressed the "mistaken assertion". you just say "nah" and refuse to elaborate. which is fine, i guess. but holy moly is it ever frustrating to read some of your comment chains. it often appears that your sole goal in commenting is to try and dunk on people -- at least that is how many of your comments come across to me.
I was explicit about what the real argument isn't: the notion that lattice cryptography is under-studied compared to RSA/ECC.
I understand what your takeaway from this thread is, but my perspective is that the thread is a mix of people who actually work in this field and people who don't, both sides with equally strong opinions but not equally strong premises. The person I replied to literally followed up by saying they don't follow the space! Would you have assumed that from their preceding comment?
(Not to pick on them; acknowledging that limitation on their perspective was a stand-up move, and I appreciate it.)
You do "XYZ isn't the right argument, ABC is" on a thread like that, and the reply tends to be "well yeah that's what I meant, ABC is just a special case of XYZ". No thanks.
I'm not a professional cryptographer, but I _am_ really interested in the opinions of experts in the field and I do have a lot of prior experience with crypto (the actual kind, not *coin). From my point of view, I just don't see what the fuss is all about.
There's no shared understanding, just a snarky expert claiming (in effect) "I know better than all you simpletons but I'm not going to share". At best it's incredibly poor behavior. At worst it's the behavior of someone who doesn't actually have a defensible point to make.
As far as I know, the currently standardized lattice methods are not known to be vulnerable? And the biggest controversy seemed to be the push for inclusion of non-hybrid methods?
I'm not following crypto closely anymore, I stopped following the papers around 2014, right when learning-with-errors started becoming mainstream.
We can disagree on the tradeoff, but if you see no upside, you are missing the velocity cost of the specification work, the API design, and the implementation complexity. Plus the annoying but real social cost of all the bikeshedding and bickering.
All of those costs are at least as high for non-hybrid. The spec and API are just as easy to design (because we have really good and simple ECC libraries), and the bikeshedding and bickering will be a lot less if people stop trying to force pure PQC algorithms that lots of people see as incredibly risky for incredibly little benefit.
> Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:
> Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”
To summarize, the hard part of scalable quantum computation is error correction. Without it, you can't factor essentially anything. Once you get any practical error correction, the distance between 32-bit RSA and 2048-bit RSA is small. Similarly, the hard part was causing a self-sustaining fission chain reaction; once you have that, making the bomb bigger is not the hard part.
This is what the experts know, and why they tell us of the timelines they do. We'd do better not to dismiss them by being smug about our layperson's understanding of their progress curve.
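A rough sketch of the arithmetic behind that summary (standard surface-code estimates, hedged as approximations): once physical error rates are below the fault-tolerance threshold, the logical error rate is suppressed exponentially in the code distance d, while the qubit overhead only grows polynomially:

    p_logical ≈ A * (p_physical / p_threshold)^((d+1)/2)
    physical qubits per logical qubit ≈ 2 * d^2

So pushing logical error rates from ~10^-3 down to the ~10^-12 range a 2048-bit factorization needs costs roughly quadratic qubit overhead, not a new breakthrough, which is why the gap between toy demos and cryptographically relevant sizes is smaller than it looks once below-threshold operation is achieved.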
I’ve worked with Bas. I respect him, but he is definitely a QC maximalist in a way. At the very least he believes that caution suggests the public err on the side of believing we will build them.
The actual challenge is that we still don't know if we can build QC circuits that factor faster than classical computers, both because the required number of qubits has gone from ridiculously impossible to probably still impossible, and because we still don't know how to build circuits with enough qubits to break classical algorithms any larger or faster than classical computers can. If you're paying attention to the breathless reporting, you'd have a very skewed perception of where we're at.
It's also easy to deride your critics as just being contrarian on forums, but that complaint happens to distract from the actual lack of real forward progress towards building a QC. We've made progress on all kinds of different things except for actually building a QC that can scale to solve non-trivial problems. It's the same critique as with fusion energy, with the sole difference being that we actually understand how to build a fusion reactor, just not one that's commercially viable yet, and fusion energy would be far more beneficial than a QC, at least today.
There's also the added challenge that quantum computers currently have only one real application, which is as a weapon to break crypto. Other use cases are generally hand-waved as "possible" but it's unclear whether they actually are (i.e., you can't just take any NP problem and make it faster even if you had a quantum computer; even traveling salesman is not known to be faster, and even if it were, it's likely still not economical on a QC).
Speaking of experts, Bas is a cryptography expert with a specialty in QC algorithms, not an expert in building quantum computers. Scott Aaronson is also well respected, but he isn't building QC machines either; he's a computer scientist who understands the computational theory, but that doesn't make him a better prognosticator if the entire field is off on a fool's errand. It just means he's better able to parse and explain the actual news coming from the field in context.
Don't recognise you from your username, but thanks for the respect. (Update: ah, Vitali! Nice to hear from you.)
If you look back at my writing from 2025 and earlier, I'm on the conservative end of Q-day estimates: 2035 or later. My primary concern then is that migrations take a lot of time: even 2035 is tight.
I'm certainly not an expert on building quantum computers, but what I hear from those that are worries me. Certainly there are open challenges for each approach, but that list is much shorter now than it was a few years ago. We're one breakthrough away from a CRQC.
For me, it's the presumption that Q-day will happen that puts you more in the maximalist camp, the same way people who believe AGI is inevitable are AI maximalists. I could also be misremembering our conversation, but I thought you had said something like 2029 or 2030 in our 2020 conversation :)?
My concern is that there's so much human and financial capital behind quantum computing that the "experts" have lots of reason to try to convince you that it's going to happen any day now. The cryptographic community is rightly scared by the potential, because we don't have any theoretical basis to rule out that QC speedups are physically possible, but we also don't have any proof (existence or theoretical) that they actually are.
The same diagrams that show physical qubits per year, or physical qubits necessary to crack some algorithm, are the same ones powering funding pitches, and that's very dangerous to me - it's very possible it's a tail-wagging-the-dog situation.
The negative evidence here for me is that all the QC supremacy claims to date have evaporated as faster classical algorithms have been developed. This means the score is currently 0/N for a faster-than-classical QC. The other challenge is that we don't know where BQP fits, or whether it even exists as a distinct class rather than a theoretical class of problems we merely named. That doesn't get into the practical reality that layering more and more error correction doesn't matter so much when the entire system still decoheres at any size relevant to solving non-trivial problems.
Should we prepare for QC on the cryptography side? I don't know, but I still put less than a 10% chance on a CRQC happening in the next 20 years. I also look at the other side: if a CRQC never happens, we're paying a meaningful cost, both in human capital spent hardening systems against it and ongoing in slowing down worldwide communications to protect against a harm that never materializes (not to mention all the funding burned chasing the QC itself). The problem I'm concerned about is that there's no meaningful funding spent on working out whether BQP actually exists as a distinct class and what it actually looks like.
> I could also be misremembering our conversation, but I thought you had said something like 2029 or 2030 in our 2020 conversation
Think that must've been around 2022. It'd have been me mentioning 2030 regulatory deadlines. So far progress in PQC adoption has been mostly driven by (expected) compliance. Now it'll shift to a security issue again.
> My concern is that there's so much human and financial capital behind quantum computing that the "experts" have lots of reason to try to convince you that it's going to happen any day now.
There've been alarmist publications for years. If it were just some physicists again, I'd have been sceptical. This is the security folks at Google pulling the alarm (among others.)
> [B]ut we also don't have any proof (existence or theoretical) that proves they are actually possible.
The theoretic foundation is pretty basic quantum mechanics. It'd be a big surprise if there'd be a blocker there. What's left is the engineering. The problem is that definite proof means an actual quantum computer... which means it's already too late.
> The other challenge is we don't know where BQP fits
This is philosophy. Even P=NP doesn't imply cryptography is hopeless. If the concrete cost between using and breaking is large enough (even if it's not asymptotically) we can have perfectly secure systems. But this is quite a tangent.
> Should we prepare for QC on the cryptography side?
A 10% chance it happens by 2030 means we'll need to migrate by 2029.
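That's essentially Mosca's inequality: with x the shelf life your data needs, y the time your migration takes, and z the time until a CRQC exists, you're already in trouble whenever

    x + y > z

For example, taking z = 2030 as the pessimistic case: data that must stay confidential for even one year (x = 1) means the key-exchange migration (y) has to be finished by 2029, which is where that deadline comes from.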
> it and ongoing in terms of slowing down worldwide communications
We've been working hard to make the impact negligible. For key agreement the impact is very small. And with Merkle Tree Certificates we also make the overhead for authentication negligible.
The thing is, producing the right isotopes of uranium is mostly a linear process. It goes faster as you scale up of course, but each day a reactor produces a given amount. If you double the number of reactors you produce twice as much, etc.
There is no such equivalent for qubits or error correction. You can't say "we produce this much extra error correction per day, so we will hit the target by such-and-such a date."
There is also something weird in the graph in https://bas.westerbaan.name/notes/2026/04/02/factoring.html. That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with less than 10^4 qubits. Which seems very odd. At the same time, Scott Aaronson wrote: "you actually can now factor 6- or 7-digit numbers with a QC". Which in the graph suggests that the error rate must be very low already or quantum computers with an insane number of qubits exist.
We are stretching the metaphor thin, but surely the progress towards an atomic bomb was not measured only in uranium production, in the same way that the progress towards a QC is not measured only in construction time of the machine.
At the theory level, there were only theories, then a few breakthroughs, then some linear production time, then a big boom.
> Something doesn't add up here.
Please consider it might be your (and my) lack of expertise in the specific sub-field. (I do realize I am saying this on Hacker News.)
Not only, but a huge challenge was manufacturing enough fuel, and that was the real limiting part. They were working out hard science and engineering, but more fuel definitely == bigger bomb in a very real way, and it is quite linear because E=mc^2. It was in many ways the bottleneck for the bombs - it literally guided how big they made the first bomb, and the US manufactured enough for three: one test, two to drop.
> That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with less than 10^4 qubits. Which seems very odd.
It's because the plot is assuming the use of error correction even for the smallest cases. Error correction has minimum quantity and quality bars that you must clear in order for it to work at all, and most of the cost of breaking RSA4 is just clearing those bars. (You happen to be able to do RSA4 without error correction, as was done in 2001 [0], but it's kind of irrelevant because you need error correction to scale so results without it are on the wrong trendline. That's even more true for the annealing stuff Scott mentioned, which has absolutely no chance of scaling.)
You say you don't see the uranium piling up. Okay. Consider the historically reported lifetimes of classical bits stored using repetition codes on the UCSB->Google machines [1]. In 2014 the stored bit lived less than a second. In 2015 it lived less than a second. 2016? Less than a second. 2017? 2018? 2019? 2020? 2021? 2022? Yeah, less than a second. And this may not surprise you but yes, in 2023, it also lived less than a second. Then, in 2024... kaboom! It's living for hours [4].
You don't see the decreasing gate error rates [2]? The increasing capabilities [3]? The ever larger error correcting code demonstrations [4]? The front-loaded costs and exponential returns inherent to fault tolerance? TFA is absolutely correct: the time to start transitioning to PQC is now.
You can already factor a 6 digit number with a QC, but not with an algorithm that scales polynomially. The graph linked is for optimized variants of Shor's algorithm.
So today you have 1 gram. No bomb. Tomorrow you have 2 grams. Still no bomb.
...
365 days later, you have 365 grams after spending ungodly amounts of energy to separate isotopes. AND STILL NO BOMB! Not even a small one. These scientists are just some bullshit artists.
> Similarly to how the hard part is to cause a self-sustaining fissile chain reaction, and once you do making the bomb bigger is not the hard part.
I don't like this analogy very much, because in practice making a nuclear reaction is much, much easier than making a nuclear bomb. You don't need any kind of enrichment or anything, just a big enough pile of natural uranium and graphite [1].
Making a bomb, on the other hand, required an insane amount of engineering: from isotope separation to enrich U-235 to an absurd level (and/or extracting plutonium from the waste of a nuclear reactor), to designing a way to assemble a beyond-critical mass of fissile material.
The Manhattan project isn't famous without reason, it was an unprecedented concerted effort that wouldn't have happened remotely as quickly in peacetime.
The Manhattan Project scientists actually did this before anybody broke ground at Los Alamos. It was called the Chicago Pile. And if the control rods were removed and the SCRAM disabled, it absolutely would have created a "small nuclear explosion" in the middle of a major university campus.
Given the level of hype and how long it's been going on, I think it's totally reasonable for the wider world to ask the quantum crypto-breaking people to build a Chicago Pile first.
In truth the Chicago Pile crowd were all about power generation and didn't think it was feasible to make a nuclear bomb...
(Not impossible, more strictly "beyond reach" economically and processing-wise, operating on overestimates of the effort and approach.)
They ignored letters from Albert Einstein on the topic, they ignored or otherwise disregarded several letters from the Canadian/British MAUD Committee / Tube Alloys group, and it took a personal visit from an Australian for them to sit up and take note that such a thing was actually within reach... although it would take some manpower and a few challenges along the way.
Remember that the entities most likely to heed that government's recommendations are those providing services to said government and its military.
I feel like the NSA pushing a NOBUS backdoor (definitely misguided, and obviously later exploited by adversaries) has percolated poorly into the collective consciousness, with the NOBUS part getting missed entirely.
IMO the idea that NSA only uses NOBUS backdoors is obviously false (see for example DES's 56 bit key size). The NSA is perfectly capable of publicly calling for an insecure algorithm and then having secret documentation to not use it for anything important.
DES is the algorithm that was secretly modified by the NSA to protect it against differential cryptanalysis. Capping a key size is hardly a "backdoor."
Also, that was the time of export ciphers and Suite A vs Suite B, which were very explicit about there being different algorithms for US NatSec vs. everything else. This time there's only CNSA 2.0, which is pure ML-KEM and ML-DSA.
So no, there is no history of the NSA pushing non-NOBUS backdoors into NatSec algorithms.
In fairness, that was from 1975. I don't particularly trust the NSA, but I don't think things they did half a century ago are a great way to extrapolate their current interests.
This article is more aimed at those specifying and implementing WebAuthN and SSH, than at those using them.
They/we need to migrate those protocols to PQ now, so that you all can start migrating to PQ keys in time, including the long tail of users that will not rotate their keys and hardware the moment the new algorithms are supported.
For example, it might be too late to get anything into Debian for it to be in oldstable when the CRQCs come!
> This article is more aimed at those specifying and implementing WebAuthN and SSH, than at those using them.
Sure, I'm just trying to understand the consequences of that. Felt great to finally have secure elements on smartphones and laptops (or Yubikeys), protecting against the OS being compromised (i.e. "you access my OS, but at least you can't steal my keys").
I was wondering if PQ meant that when it becomes reality, we just get back to a world where if our OS is compromised, then our keys get compromised too. Or if there is a middle ground in the threat model, e.g. "it's okay to keep using your Yubikey, because an attacker would need physical access to your key, specialised hardware AND access to a quantum computer in order to break it". Versus "you can stop bothering with security keys because, with store-now-decrypt-later, everything you do today with your security keys will get broken by quantum computers eventually anyway".
If you are doing authentication with those hardware keys, you will probably be fine, if we do our job fast enough. Apple's Secure Enclave already supports some PQ signatures (although annoyingly not ML-DSA-44 apparently?) and I trust Yubico is working on it.
If you are doing encryption, then you do have reason to worry, and there aren't great options right now. For example, if you are using age you should switch to hybrid software ML-KEM-768 + hardware P-256 keys as soon as they are available (https://github.com/str4d/age-plugin-yubikey/pull/215). This might be a scenario in which hybrids provide some protection, so that an attacker will need to both compromise your OS and have a CRQC. In the meantime, depending on your threat model and the longevity of your secrets (and how easily they can be rotated in 1-2 years), it might make sense to switch to software PQ keys.
I mean "your OS and have a CRQC" because they will need to compromise the software PQ key by compromising the OS, and derive the hardware YubiKey private key using the CRQC.
That was my position until last year, and pretty much a consensus in the industry.
What changed is that the new timeline might be so tight that (accounting for specification, rollout, and rotation time) the time to switch authentication has also come.
ML-KEM deployment is tangentially touched on in the article because it's both uncontroversial and underway, but:
> This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.
> For key exchange, the migration to ML-KEM is going well enough but: 1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years. [...]
Your comment is essentially the premise of the other article.
I agree with you that one must prepare for the transition to post-quantum signatures, so that when it becomes necessary the transition can be done immediately.
However, that does not mean that the switch should really be made as soon as possible, because it would add unnecessary overhead.
This could be done by distributing a set of post-quantum certificates, while continuing to allow the use of the existing certificates. When necessary, the classic certificates could be revoked immediately.
> I agree with you that one must prepare for the transition to post-quantum signatures, so that when it becomes necessary the transition can be done immediately.
Personally, my reading between the lines on this subject as a non-expert is that we in the public might not know when post-quantum cryptography is necessary until quite a while after it is necessary.
Prior to the public-key cryptography revolution, the state of the art in cryptography was locked inside state agencies. Since then, public cryptographic research has been ahead of or even with state work. One obvious tell was all the attempts to force privately-operated cryptographic schemes to open doors to the government via e.g. the Clipper chip and other appeals to magical key escrow.
A whole generation of cryptographers grew up in this world. Quantum computing might change things back. We know what papers from Google and other companies say. Who knows what is happening inside the NSA or military facilities?
It seems that with quantum computing we are back to physics, and the government does secret physics projects really well. This paragraph really stood out to me:
> Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.
Couldn't the NSA have known about an issue with ML-KEM, and thus wanted to promote its commercial acceptance, which it did simply by approving the algorithm?
What's the PQC construction you couldn't say either thing about?
> Couldn't the NSA have known about an issue with ML-KEM, and thus wanted to promote its commercial acceptance, which it did simply by approving the algorithm?
They could have, but they did not do that. So the question to ask is: why?
As a practical matter, revocation on the Web is handled mostly by centrally distributed revocation lists (CRLsets, CRLite, etc. [0]), so all you really need is:
(1) A PQ-secure way of getting the CRLs to the browser vendors.
(2) A PQ-secure update channel.
Neither of these requires broad-scale deployment.
However, the more serious problem is that if you have a setting where most servers do not have PQ certificates, then disabling the non-PQ certificates means that lots of servers can't do secure connections at all. This obviously causes a lot of breakage and, depending on the actual vulnerability of the non-PQ algorithms, might not be good for security either, especially if people fall back to insecure HTTP.
Indeed, in an open system like the WebPKI it's fine in theory to only make the central authority PQ, but then you have the ecosystem adoption issue. In a closed system, you don't have the adoption issue, but the benefit to making only the central authority PQ is likely to be a lot smaller, because it might actually be the only authority. In both cases, you need to start moving now and gain little from trying to time the switchover.
> In both cases, you need to start moving now and gain little from trying to time the switchover.
There are a number of "you"s here, including:
- The SDOs specifying the algorithms (IETF mostly)
- CABF adding the algorithms to the Baseline Requirements so they can be used in the WebPKI
- The HSM vendors adding support for the algorithms
- CAs adding PQ roots
- Browsers accepting them
- Sites deploying them
This is a very long supply line and the earlier players do indeed need to make progress. I'm less sure how helpful it is for individual sites to add PQ certificates right now. As long as clients will still accept non-PQ algorithms for those sites, there isn't much security benefit so most of what you are doing is getting some experience for when you really need it. There are obvious performance reasons not to actually have most of your handshakes use PQ certificates until you really have to.
Yeah, that's an audience mismatch, this article is for "us." End users of cryptography, including website operators and passkey users (https://news.ycombinator.com/item?id=47664744) can't do much right now, because "we" still need to finish our side.
I do not understand the fixation on authentication/signatures. They have different threat characteristics:
You cannot retroactively forge historical authentication sessions, and future forgery ability does not compromise past data; it only matters for long-lived signed artifacts (certificates, legal documents, etc.). Yet the thread apparently keeps pivoting to signature deployment complexity?
The argument is that deploying PQ-authentication mechanisms takes time. If the authenticity of some connections (firmware signatures, etc.) is critical to you and news comes out that ("cheap") quantum attacks are going to materialize in six months, but you need at least twelve months to migrate, you are screwed.
There is also a difference between closed ecosystems and systems that are composed of components from many different vendors and suppliers. If you are Google, securing the connection between data centers on different continents requires only trivial coordination. If you are an industrial IoT operator, you need dozens of suppliers to flock around a shared solution. And for comparison, in the space of operational technology ("OT"), there are still operators that choose RSA for new setups, because that is what they know best. Change happens at a glacial pace there.
You can't do software updates securely, but it strikes me that compromising the revocation process is a good thing. Suppose you can use a key to sign a message saying "stop using this". If someone else breaks that key and falsely signs that message, what are the downsides?
You revoke a cert because you lose control of it; if someone else can falsely revoke that cert, doesn't that truthfully send the exact same signal? That you lose control of it?
Can you explain a bit more what you mean by "secure" in the context of "actual revocations"? The oxymoronic nature isn't self-evident enough for me to catch your intended meaning before my first cup of coffee.
How can you falsely revoke a certificate? If an attacker can revoke a certificate, either by falsifying the signature or possessing the necessary key material, it is by definition not a trustworthy certificate anymore, and the revocation is therefore correct.
In the public CA PKI, it is the CA which has the power to revoke their issued certificates. In other systems, it can be the private key for the certificate itself. In either case, the certificate is not to be trusted anymore.
Revocation is the least of your worries should your signature algorithm be broken in the future.
If you don't have the private key on hand to issue a revocation, your next best bet is to find a parser bug that convinces some subset of user agents that the valid certificate you don't hold the private key for is actually invalid. (Hence, a false revocation.)
And then, get those users into the habit of accepting invalid/revoked certificates if they want to access the site. And then after weeks of battling against their patience or endurance, then you offer an invalid cert for a MitM.
If you receive a forged crl, in the worst case it will revoke certificates that you can't trust anyway. Even if it says "certificate X is still good", that's equivalent to receiving no crl.
Perhaps it's already necessary, or it will be in the following months. We are hearing only about the public developments, not whatever classified work the US is doing
I think the analogy with the Manhattan project is apt. The US has enormous interest in decrypting communication streams at scale (see Snowden and the Utah NSA datacenter), and it's known for storing encrypted comms for decrypting later. Well maybe later is now
If you want something book-shaped, the 2nd edition of Serious Cryptography is updated to when the NIST standards were near-final drafts, and has a nice chapter on post-quantum cryptography.
If you want something that includes details on how they were deployed, I'm afraid that's all very recent and I don't have good references.
That's a fun list; the only hits in the top 100 are actually Cloudflare, for whom automatic DNSSEC is a feature, and for whom it would be a bad look not to dogfood it.
(I did a lot of the work of shipping that product in a past life. We had to fight the protocol and sometimes the implementers to beat it into something deployable. I am proud of that work from a technical point of view, but I agree DNSSEC adds little systemic value and haven’t thought about it since moving on from that project almost 10 years ago. It doesn’t look like DNSSEC itself has changed since, either.)
Then a few government sites, which have mandated it. The first hit after those is around #150.