So, in addition to (E)fficiency and (P)erformance cores, it now seems pretty inevitable we'll be seeing (S)ecurity cores soon? Clocked at, say, 25% (or possibly 250%, since that makes them more difficult to observe, remotely?) of the lowest truly-constant rate available on the chip and designed/routed in such a way that side-channels are less attractive to exploit?
My bet is that they’ll lock down more and more registers so you can’t read those values in the first place; this is what they’ve done with previous power- and frequency-related side-channel attacks.
See also the closely related "DVFS Frequently Leaks Secrets: Hertzbleed Attacks Beyond SIKE, Cryptography, and CPU-Only Data," presented at Oakland last week: https://www.hertzbleed.com/2h2b.pdf
The real solution is to stop executing untrusted code that was just downloaded from some random place on the internet. You can't simultaneously optimize a runtime and secure it for untrusted execution. You have to execute it in a de-optimized manner to avoid side-channel leakage. We'll keep discovering these 'bugs' until we do what we did to Java Applets.
I hate most uses of JavaScript, and I want to be running only open-source components that I trust. So I should agree with you. But that doesn't sound realistic to me, even in dreams.
JavaScript runs surprisingly well without optimization. I think it was the Chromium-based version of Edge, of all browsers, that was the first to offer disabling it as a security pitch.
General-purpose computers are wonderful because they can run so many diverse applications. If you get rid of “untrusted code”, people will still get thousands of apps marked “trusted” on their machines. There is no more-secure past we can return to.
That past seems to be left to the Linux desktop, given that Microsoft already signaled at Blue Hat IL 2023 that developer certificates will be coming to Windows, maybe in the Windows 12 time frame, similar to notarized apps on macOS or the whole mobile ecosystem.
What Linux desktop? The only way Linux will be available on readily available personal computers is via WSL, aka Other OS for PCs... and just as likely to be taken away once it threatens Microsoft's security posture/bottom line.
General purpose computing is dead. The only way to reasonably assure lack of self-foot-shooting by end users is a mandatory signed code path from power on right down to application level software.
I'd say Raspberry Pi is an alternative, but the Raspberry Pi Foundation is committing an epic, Commodore-style self-own right now and the future of that platform is in jeopardy. Orange Pi and the like are nonstarters.
The point about the Linux desktop was that it is the only place left for those who won't dance to the tune of Apple's, Microsoft's, and Google's operating systems.
How much will it grow beyond 1%? Not much, if at all.
I see their Smart App Control plan landing 100% in the corporate space. For the home user space, I wonder how many will just "opt out to desktop mode" like they've been doing with every other option until this point.
So they do not want to ban apps but rather ban the developers. This opens an ocean of abuse for small developers. Thank you very fucking much, MS. Hopefully it will fail.
Luckily, like many of the sensor-based side-channel attacks, the information rate is extremely slow (0.125 bps nominal rate for pixel sniffing) and the machine has to start in a known calibrated state.
Not to diminish the work, it's an excellent lab experiment, but this has practically zero real-world application.
At 0.125 bps you reduce the pool of possible users by half every 8 seconds. If your existing fingerprinting results in a pool of 100,000 possible users, you can identify the exact user if they leave the page open for 2 minutes and 13 seconds.
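The arithmetic behind that claim is straightforward information theory: singling out one user from a pool of N requires log2(N) bits, and at 0.125 bps each bit takes 8 seconds. A quick sketch (the 100,000-user pool is the figure from the comment above):

```python
import math

LEAK_RATE_BPS = 0.125   # 1 bit every 8 seconds (nominal pixel-sniffing rate)
POOL = 100_000          # users remaining after ordinary fingerprinting

# Each leaked bit halves the candidate pool, so identifying one user
# out of POOL takes log2(POOL) bits.
bits_needed = math.log2(POOL)          # ~16.6 bits
seconds = bits_needed / LEAK_RATE_BPS  # ~133 s

print(f"{bits_needed:.1f} bits -> {seconds:.0f} s "
      f"({int(seconds // 60)} min {round(seconds % 60)} s)")
```

which works out to about 133 seconds, i.e. the 2 minutes 13 seconds quoted.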
The problem here is not exfiltrating data via bizarre side channels. It's the availability of the data. Because it's the old and tired "visited link color" problem all over again.
Can browser vendors just stop honoring the a:visited style completely? Or at least in iframes and other sources that don't arrive with the original page? That could initially be a configuration flag, like it was with rejecting third-party cookies.
I don't think the impact of never coloring visited links differently would be noticeable for 99.99% of users.
For me at least, this is very useful on search results, e.g. on Google, when I'm trying to find again something I found before but didn't need until now.
Maybe it would be better to limit the styling to just the color, and when the style is checked programmatically, return the value that would apply if the link were unvisited.
I think it would make sense to track this with a separate database for every host domain. Either that or have a user-only rendering layer which applies the colors, but which isn't visible to any scripts that are running.
Hiding the result of rendering from scripts by default is indeed reasonable, and is being done (see "tainted canvas").
This exploit avoids looking directly at pixels, because the pixels are not available. It runs an SVG filter over those pixels without ever looking at the result of the filtering; it just measures how much time/energy the render took.
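The core idea can be sketched in a few lines. This is a deliberately simplified, hypothetical model (not the actual exploit code): assume rendering the filter over a "white" pixel takes measurably longer than over a "black" one, and note that the attacker only ever sees timings, never pixel values.

```python
import statistics

# Hypothetical data-dependent render costs in milliseconds.
# The numbers are illustrative only; real attacks must contend with noise,
# calibration, and far smaller timing differences.
RENDER_MS = {0: 4.0, 1: 4.6}  # 0 = black pixel, 1 = white pixel

def time_renders(secret_pixel, repeats=50):
    """Simulate timing `repeats` renders of the SVG filter over one pixel.
    In a browser this would come from frame timestamps; here we just
    return the modeled cost, noise-free for clarity."""
    return [RENDER_MS[secret_pixel] for _ in range(repeats)]

def guess_pixel(samples, threshold_ms=4.3):
    """Recover the pixel from timing alone: a slow render implies white."""
    return 1 if statistics.mean(samples) > threshold_ms else 0

secret = [1, 0, 1, 1, 0]  # pixels the attacker cannot read directly
recovered = [guess_pixel(time_renders(p)) for p in secret]
```

The point of the sketch is the control flow: at no step does the attacker read a pixel value; classification happens purely on elapsed time, which is why "tainted canvas" protections alone don't stop it.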