Hacker News | nulltrace's comments

Browsers already treat the same SVG differently depending on how you embed it: <img> strips scripts and external resource loads, while <object> and inline SVG don't. People test with <img> tags, it looks fine, and then someone switches the embed method and everything opens up.

It'd be nice if there were a way to declare in the URL that a given SVG should only be treated as an image, so that you could safely open SVG URLs without exposing yourself to the dangers of embed/inline.

Couldn’t you do that using Content-Security-Policy?

If you control the domain then yes, you could. But if I want to put a link on my website to an SVG hosted elsewhere, and I want it to be safe for you to open that link in a new tab, there's not really a way for CSP to protect you, the user, from the host deploying a malicious SVG.

Like opening a PNG in a new tab is harmless, but opening an SVG that way opens a pretty substantial can of worms.
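For the case where you do control the serving domain, the defense looks something like the following response headers (the exact values are illustrative, not a complete hardening recipe):

```http
Content-Type: image/svg+xml
Content-Security-Policy: sandbox; default-src 'none'; style-src 'unsafe-inline'
```

The `sandbox` directive alone already blocks script execution when the SVG is opened as a top-level document; the rest is belt-and-braces. None of this helps when the SVG lives on a host you don't control, which was the point above.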


If your threat model is “I don’t want the image I’m hotlinking to be replaced with something else when opened in a new tab”, then no image format is safe.

We added a preflight curl against registry.npmjs.org before the install step in CI. Not surprising they went down together.
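A minimal sketch of that preflight (the URL and timeout are our choices, nothing standard):

```shell
#!/bin/sh
# Preflight check: fail fast if the package registry is unreachable,
# instead of dying halfway through `npm ci`.
preflight() {
  # -s silent, -S still show errors, -f fail on HTTP >= 400, hard 10s cap
  curl -sSf --max-time 10 -o /dev/null "$1"
}

# In CI, before the install step:
#   preflight "https://registry.npmjs.org/" || exit 1
```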

Downtime is one thing. Silently reverting commits on your default branch is something else entirely.

Preview deploys are even worse. Every PR spins one up with the same env vars and nobody ever cleans them up. You rotate the key, redeploy prod, and there are still like 200 zombie previews sitting there with the old value.

Catching accidental drift is still worth a lot. It's basically the same idea as performance regression tests in CI: nobody writes those because they expect sabotage. They're for the boring stuff, like "oops, we bumped a dep and throughput dropped 15%".

If someone actually goes out of their way to bypass the check, that's a pretty different situation legally compared to just quietly shipping a cheaper quant anyway.


Also it's not just about running an obviously worse quant.

Running different GPU kernels / inference engines also matters. It's easy to write an implementation that is faster and thus cheaper but numerically much noisier / less accurate.


Yeah, the threat model is nonexistent. Most people use a dozen or so well-known providers, who have no incentive to cheat so obviously.

Right, metaclass is a ways off. But even without it, just the core reflection is going to save a ton of boilerplate. Half the template tricks I've written for message parsing were basically hand-rolling what `^T` will just give you.

Rebalancing is what really kills you. A CAS loop on a flat list is pretty straightforward, you get it working and move on. But rotations? You've got threads mid-insert on nodes you're about to move around. It gets ugly fast. Skiplists just sidestep the whole thing since level assignment is basically a coin flip, nothing you need to keep consistent. Cache locality is worse, sure, but honestly on write-heavy paths I've never seen that be the actual bottleneck.

Yeah pricing seems okay with batching. The 128MB memory cap per Durable Object is what I'd watch. A repo with a few thousand files and some history could hit that faster than you'd expect, especially during delta resolution on push.

We do a lot of work to optimize this on our side :)

Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.

Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into the motivations.

But OSSL_PARAMs on the frontend feel more like a punt than a solution. To maintain API compatibility you still end up with all the same cruft in both the library and the application; it's just more opaque and confusing figuring out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)).

With the module interface decoupled and a faster release cadence, exploring and iterating on more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, LibreSSL, etc., but also cross-pollination and communication, and over time interfaces get refined and unified.

Lockfiles help more than people realize. If you're pinned and not auto-updating deps, a package getting sold and backdoored won't hit you until you actually update.

The scarier case is Dependabot opening a "patch bump" PR that probably gets merged because everyone ignores minor version bumps.


I mitigate this with a latest-1 policy or a minimum-age policy, depending on which dependency we're talking about. Combined with explicit hash pins instead of mutable version tags where possible, it's saved me from a few close calls already, most notably last year's tj-actions breach.
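For the GitHub Actions case, hash pinning just means using the full commit SHA instead of a tag (the SHA below is illustrative, not a real release commit):

```yaml
steps:
  # Pin to a full commit SHA instead of a mutable tag like @v4;
  # the trailing comment records which release the SHA was taken from.
  - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v4.2.0
```

A tag can be moved to point at compromised code after the fact; a commit SHA can't.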


I wish the PRs the bot opens included a diff of the source code of the upgraded libraries, right in the PR. Even if in theory you could manually hunt down the diffs in the various tags, in practice nobody does.


No need to hunt it down, there's a URL in the PR / commit message that links to the full diff.

