Most likely yes. There are a lot of enterprises out there that only trust paid subscriptions.
Paying for something “secure” comes with the benefit of risk mitigation: we paid X to give us a secure version of Y, hence it's not our fault the “bad thing” happened.
Counterpoint: most likely no. It's really about all the downstream impact of critical and high findings in scanners, e.g. the risk of failing a SOC 2 audit. Once that risk is removed, the value prop is removed with it.
F500s trust the paid subscriptions because it means you can escalate the issue -- you're now a paying client so you get support if/when things explode -- and that also gives you a lever to shift blame or ensure compliance.
I recall being an infra lead at a Big Company you've heard of and having to spend a month working with procurement to get like 6 Mirantis / Docker licenses to do a CCPA compliance project.
I don't think that's the case here. The reason you want to lower your CVE count is to say "we're compliant" or "it's not our fault a bad thing happened, we use hardened images". Paying doesn't really change that: your SOC 2 audit doesn't ask how much you spent, it asks what your patching policy is. This makes that checkbox free.
We’re being told AI will “augment” workers, but it increasingly looks like it’s augmenting them right out of a job. Meanwhile, Microsoft’s stock is near record highs. Shareholders win, employees lose.
There's no contradiction there, and the companies aren't even directly lying: they're saying that with AI each worker can do more. It then stands to reason that, unless you're in a rapidly growing business, you're going to need fewer workers.
It’s ironic: for years the open-source community was trying to match GPT-3 (175B dense) with 30B–70B models + RLHF + synthetic data—and the performance gap persisted.
Turns out, size really did matter, at least at the base model level. Only with the release of truly massive dense (405B) or high-activation MoE models (DeepSeek V3, DBRX, etc) did we start seeing GPT-4-level reasoning emerge outside closed labs.
> Welch was just very good at understanding that some invisible boundaries didn’t apply anymore, and that the zeitgeist was shifting in that direction
Agreed: Welch didn't invent shareholder primacy, but he industrialized it. What makes him so consequential isn't that he played the game better; it's that he normalized a playbook that treated human capital as expendable.
This is a textbook case of micro-architectural reality beating theoretical elegance. It's fascinating that replacing 5 loads with 2 loads + 3 vextq_f32 intrinsics, which should reduce memory pressure, ends up being slower because of execution-port contention and longer dependency chains: the vext results each depend on both loads, whereas 5 independent loads can all issue in parallel.