There’s always a guy. It’s great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
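Since "property tested" gets thrown around loosely: a minimal sketch of what it means in practice, in plain Python. Real projects would use a library like Hypothesis; the encode/decode pair here is a hypothetical codec invented for illustration. The property being checked is that decoding an encoded string always gives back the original, across randomly generated inputs.

```python
import random

def encode(s: str) -> str:
    # Hypothetical codec under test: escape backslashes, then newlines.
    return s.replace("\\", "\\\\").replace("\n", "\\n")

def decode(s: str) -> str:
    # Left-to-right scan so "\\n" (escaped backslash followed by 'n')
    # is not mistaken for an escaped newline.
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append("\n" if s[i + 1] == "n" else s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

def roundtrip_property(trials: int = 1000) -> bool:
    # Property: decode(encode(s)) == s for randomly generated inputs
    # drawn from an alphabet chosen to stress the escaping logic.
    rng = random.Random(0)
    for _ in range(trials):
        s = "".join(rng.choice("ab\\\n") for _ in range(rng.randrange(12)))
        assert decode(encode(s)) == s, repr(s)
    return True
```

A naive `decode` built from two `str.replace` calls fails this property on inputs like a literal backslash followed by "n", which is exactly the kind of bug random property testing is good at flushing out.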
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad idea that probably leaked from training sets.
Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general-purpose. As a consequence of doing more, the library will have more code and more bugs.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
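To make the "even trivial code has edge cases" point concrete, here's a sketch of a left-pad written defensively. This is not the actual leftpad package's code, just an illustration of the edge cases even a one-liner-sized function has to get right:

```python
def left_pad(value, width, fill=" "):
    """Pad `value` on the left to `width` characters.

    Edge cases a 'trivial' function still has to handle:
    non-string input, multi-character fill, width <= len(value),
    and negative width.
    """
    s = str(value)                      # accept ints, floats, etc.
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    if width <= len(s):                 # also covers negative width
        return s
    return fill * (width - len(s)) + s
```

For example, `left_pad(7, 3, "0")` gives `"007"`, while a width smaller than the input just returns the input unchanged.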
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
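As a sketch of that "implement only what you need" idea: if the config really is three known settings, a few lines of parsing replace an entire XML stack, and features like external entities simply don't exist to be exploited. The setting names and the `key = value` format here are hypothetical:

```python
def parse_config(text):
    """Parse 'key = value' lines for a fixed allowlist of settings.

    No entities, no includes, no DTDs -- the XXE class of
    vulnerabilities can't exist because the features don't.
    """
    allowed = {"host", "port", "timeout"}     # hypothetical settings
    config = {}
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        key, sep, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if not sep or key not in allowed:
            raise ValueError(f"line {lineno}: unexpected entry {line!r}")
        config[key] = value
    return config
```

The allowlist also means an unexpected key fails loudly instead of being silently accepted, which a general-purpose parser can't do for you.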
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
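For anyone unfamiliar with the bug class being described: a check-then-use sequence on a filesystem path is racy because an attacker can swap the path (e.g. for a symlink to a sensitive file) between the check and the use. A sketch of the unsafe pattern and one common mitigation, making the check part of the open itself via `O_NOFOLLOW` (POSIX-only):

```python
import os

def read_unsafe(path):
    # TOCTOU bug: the path can be replaced with a symlink to a
    # sensitive file between this check and the open() below.
    if os.path.islink(path):
        raise RuntimeError("refusing symlink")
    with open(path) as f:          # attacker wins the race here
        return f.read()

def read_safer(path):
    # Make the check and the use one atomic operation: O_NOFOLLOW
    # makes open() itself fail if the final path component is a symlink.
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    with os.fdopen(fd) as f:
        return f.read()
```

This is a simplified mitigation, not a complete fix for every variant (symlinks in intermediate directory components need `openat`-style traversal), but it illustrates why "first stat, then open" rewrites keep reintroducing the bug regardless of language.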
ETA: I'm not saying it has to, I'm saying it's possible to imagine reasons that would justify this decision in some cases.
Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text format just creates unnecessary cognitive load for everyone who has to use it. Those are real-world reasons I can think of.
But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.
Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel
There have been two LPE vulnerabilities with exploits in the Linux kernel announced today, after the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
Sure, I didn't mean to say that these examples are guaranteed 100% safe -- just that I trust them to be enormously more safe than software that accomplishes the same task that was hand-written by either a human or an LLM last week.
Are you intentionally avoiding saying "thanks to LLMs", or is it implicit? All these recent mega-bugs are surfacing through lots of fuzzing and agentic bashing, right?
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones -- and the occasional AI fuckup instead of the occasional human fuckup.
Right now it kinda feels to me like "Open Source" is the Russian army, relying on its sheer numbers and its huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.
Having casually read into a few recent incidents, the vector has often been outside the software itself: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
That is already how it works. The lone hacker in mom's basement working for free on his super-critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
This is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].
>As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. By now, any hope that companies would voluntarily be less exploitative than they're allowed to be should have been dashed.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
There is a lot of opposition in the FOSS community to restrictive/protective licenses. And to be fair, this comes from a consistent and entirely logical worldview.
There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.
I don't have any answers or solutions. But I don't think we can hand-wave the problem away.
The problem is that they get away too easily with bugs in the products they ship to customers. If this came with some penalties, there would be an incentive to invest in security, and that investment would probably often flow back to upstream projects.
How so? We have open source operating systems running on a whole slew of systems ages apart. Interesting ideas and open collaboration coming out of the OS world.
This opposed to closed off “products” that change at the whims of the company owning it.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.
* with internet access to FOSS via sourceforge and github we got an abundance of building blocks
* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use
Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.
This may all end well ultimately, but we're definitely in for a bumpy ride.
This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.
I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
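A sketch of the behaviour being asked for: check for external tools up front and report exactly what's missing, instead of silently fetching anything. The tool names here are hypothetical placeholders:

```python
import shutil
import sys

def find_missing(tools):
    """Return the subset of required external tools not on PATH."""
    return [t for t in tools if shutil.which(t) is None]

def check_dependencies(tools=("ffmpeg", "pandoc")):  # hypothetical deps
    # Tell the user what's missing and stop, letting them decide
    # how to proceed, rather than installing things behind their back.
    missing = find_missing(tools)
    if missing:
        sys.exit("Missing required tools: " + ", ".join(missing)
                 + "\nInstall them with your package manager, then re-run.")
```

The point is the `sys.exit` with a named list: the user learns what the software wants before anything touches their system.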
Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it NPM?
What we are seeing so far come out of the AI agent era is reduced not increased code quality. The few advances are by far negated by all the slop that's thrown around and that's unlikely to change.
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
Where is your sense of adventure? If you do it to a DeLorean, you might wind up with infinite time. Plus, pretty much every local car show I've been to has a handful of DeLoreans that I'd assume the owners are probably over maintaining. Actually, scratch that, let's go into a 3D printing business for DeLorean replacement parts to get the money thing down.
Many DeLorean parts - famously excluding the left fender - are still available as NOS from the company that bought John DeLorean's factory.
In fact, there are still complete, serviceable engines _and chassis_ available. And the chassis are already registered with a VIN, so when built they can be sold as a new 1984 model year vehicle.
If I remember correctly, it was the next part to be manufactured en masse before the factory shut down. They'd do a few days of X part, another few days of Y part, etc., in rotation. Another part of the factory was assembling the complete vehicles from the parts in stock.
Electric motors don’t have torque curves. It’s all available right away. As a kid I remember reading in Wired about an electric car scene in California where they had to learn things the hard way and one guy’s maiden voyage ended still in his driveway with the backend split in two.
This is essentially the "area under the curve" argument. But it's been polluted to absurdum by Internet fanboys with an agenda so now everyone thinks EVs are some magical thing that don't abide by the laws of physics.
No amount of fanboy screeching is going to change the fact that it's only 200hp. Compared to a bone-stock 70s/80s car that made 200-250hp from the factory, this 200hp EV will be a riot. But at $20k that's not what it's being compared against. The 500+HP LS crate motor and transmission combo (i.e. what this is being cross-shopped against) is going to make more than that from ~2500rpm on up.
If you graph power available at a given output RPM with an electric motor you get a line. With an ICE you get an upward and then tapering off curve. When you add transmission gears to the ICE it's a series of essentially overlapping saw teeth except on the first gear where it goes all the way down to whatever power you make at 1500-2000rpm (so like a little under 100hp for a ~500hp engine, probably like 30hp for an ICE that makes ~200hp stock).
Basically even with a flat curve there comes a point where the taller curve is so much taller it still wins.
When comparing to cars of about the same horsepower, the EV is gonna win every time, because of the flat curve. Even comparing to a more powerful ICE car where the areas are approximately equal, the EV is still probably better: you don't have to back off to shift (even CVTs "shift", for longevity reasons), and the ICE is probably not geared deep enough for the best initial acceleration (though at "modern" power levels both cars have more than enough to roast the tires).
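The saw-tooth picture is easy to reproduce numerically. A sketch with made-up numbers (a flat 200hp EV versus a hypothetical 500hp ICE whose power scales with rpm up to its peak, i.e. near-flat torque), computing the power actually available at each road speed; the gear ratios, final drive, and rpm-per-mph constant are all assumptions for illustration:

```python
def ice_power_at_rpm(rpm, peak_hp=500.0, peak_rpm=6000.0):
    # Hypothetical ICE curve: power roughly proportional to rpm up to
    # the peak (near-flat torque), zero outside the usable rev band.
    if rpm < 1000 or rpm > 6500:
        return 0.0
    return peak_hp * min(rpm / peak_rpm, 1.0)

def ice_power_at_speed(mph, gears=(3.06, 1.63, 1.00, 0.70),
                       rpm_per_mph_direct=40.0):
    # In each gear, engine rpm scales with road speed; the driver picks
    # whichever gear gives the most power. Plotting this envelope over
    # speed produces the overlapping "saw teeth".
    return max(ice_power_at_rpm(mph * rpm_per_mph_direct * g)
               for g in gears)

def ev_power_at_speed(mph, hp=200.0):
    # Flat line: full rated power essentially from a standstill.
    return hp if mph > 0 else 0.0
```

With these made-up numbers, at walking pace the 500hp ICE is stuck below its rev band even in 1st, so the 200hp EV out-powers it off the line; by highway speed the ICE's taller curve wins, which is the "area under the curve" trade-off in miniature.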
And as an aside, I think it's dumb that they make you replace the transmission. There are tons and tons and tons of cars out there that either still have the original transmission or someone swapped an SBC into them in 19-whatever. Being able to just replace the engine would make the swap a ton more accessible because you don't have to also add transmission mounting, controls, driveshaft, etc. to the list. Most older transmissions can handle "muh EV torque" just fine. It's the shifting under torque they don't like.
Basically this is cool but I think it's too expensive for the specs it has.
Edit: Not calling you a stupid fanboy, just saying you've been misled by them.
There will be torque multiplication by the transmission in 1st through 2nd, so it won't be as much of a dog as you think. Race car, no. But it'll hold its own in modern traffic, unlike a lot of older cars.
Out of curiosity I looked up the ratios for the mentioned 4L60 transmission: 1st is 3.059:1, 2nd is 1.625:1, 3rd is direct drive at 1.00:1 and 4th is overdrive at 0.696:1. Then you'll have the ratio in your rear differential, whatever that happens to be.
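To make the multiplication concrete: wheel torque is motor torque times gear ratio times final-drive ratio. The 4L60 ratios above go straight in; the 3.42:1 rear-end ratio is an assumption for illustration, and driveline losses (typically ~10-15%) are ignored:

```python
def wheel_torque(motor_tq_ftlb, gear_ratio, final_drive=3.42):
    # final_drive is an assumed rear-differential ratio; pass any of
    # the 4L60 ratios quoted above as gear_ratio. Losses ignored.
    return motor_tq_ftlb * gear_ratio * final_drive

# A motor making 250 ft-lb through the 4L60's 3.059:1 first gear puts
# roughly 2615 ft-lb at the axle, versus ~595 ft-lb in 0.696:1 overdrive.
```

That ~4.4x spread between 1st and overdrive is why a modest motor feels much stronger off the line than the rated figure suggests.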
My high school car was a 1975 Impala with the 350 cubic inch small block V8. Because of the Malaise Era emissions laws, it only produced 145hp but still had decent torque at 250ft·lb. It had a huge amount of space under the hood so perhaps this could fit both the motor and battery in there? (F/R weight balance being ignored)
Your point about people comparing this against the LS crate motor is correct IMO. This will be an expensive low-volume kit until (if!) economies of scale kick in. Only bought by people who want something different to show off to their friends at the weekend car shows.
The people who drive performance cars do. I never had a problem in my old Geo Metro making 50hp -- except when following a Corvette: they always waited until the end of the ramp to accelerate, and I needed the whole ramp to get up to speed. It works for them because they have enough power for that trick.
Being comparable to the original performance might be a feature on its own.
Insurance companies don't care what mods you do to your car, even EV swapping, except performance mods. If you tell them you've been doing performance mods, they'll drop you.
> Edit: Not calling you a stupid fanboy, just saying you've been misled by them.
No worries at all and I should have been clearer that I wasn't saying it was just as good, more that it wasn't "Oh well, 200hp" like a ICE engine. I also think raw horsepower is overrated in street driving. As a single data point, a couple of weeks ago I got to run three laps in a GTR "Godzilla" at Loudon on the interior track. It was a blast but after I'd come down off the high I realized that 585hp did not feel wildly different from the ~400hp in my Camaro. And I rarely get to use much of that (other than some of those lovely overly long onramps around here).
As somebody who used to race FB RX-7s and NA Miatas, I can say with complete certainty that somewhere north of 120rwhp would have been nice. 200 in either car would have been a hoot, especially with the EV flat power curve. And in neither would I have wanted more than ~300hp, because I have no need to go more than 150mph surrounded by other amateur car nerds. I gave up instructing because cars are just too darn fast: when I started, Miatas and Rabbits and Civics were the norm, then came the E36 and E46 M3s, then boost buggies, then the C6, and NOPE NOPE NOPE, I have a wife and a mortgage and don't need this anymore.
Oh he has those too. According to the article, his are comically oversized, even for the genre. Because of course they are. The poor dude just screams I WANT TO BELONG in a way the other children find repellent. Other than bullies who know the value of a useful fool. Weird.
I don't know either; I don't see the correlation with X and Musk, as if he is the one developing the platform and not thousands of workers and leaders. What does the CEO of a platform have to do with what people post on it? Is the CEO of HN responsible for what you just posted?
Kinda funny how people are selective about it: when you land on a website, do you check who is in charge of it, and redo that decision for each CEO change? When you host your Postgres in the cloud, I hope you check as well who is in charge of Railway or Supabase. Who knows? :/
There's only one thing I find sadder than untouchable billionaires that never see any consequences for their actions: the people who think they need to stick up for them.
> What does the CEO of a platform has to do with what people post on it?
That CEO is actively promoting political viewpoints (via his account, his platform and his AI model) that are detrimental to my country and the way I want to live my life.
> When you land on a website, you check who is in charge of it and for each CEO change you redo a decision?
No. But if the CEO is very publicly a first-class a-hole, chances are I'll hear about it and I'll actively avoid doing business with them. That goes for the car dealership in my village, as well as the websites I interact with.
I'm not from the US so I don't really care; X is an international platform and almost all the content I see isn't US-related (which kinda makes me think people should just set their account to outside the US to avoid this?). But from your point of view, it seems more like a disagreement of beliefs. Wouldn't this reasoning apply to your beliefs as well? If the CEO of a platform agreed with your beliefs but 50% of the population didn't, you're practically saying that the people who disagree should boycott the platform. But isn't that exactly how you end discourse between people and create an echo chamber?
It's more if you use it for things beyond traditional dev work. GitHub Actions has become very unstable, plus someone using it at this level -- where people are trying to download, file issues, and send code up 24/7 -- would feel the pain of every outage, not just those that happen during one's working hours.