
You'd probably have to get a license from the government for it, which would invite a ton more surveillance on you


I think a memory is not just repetition, but a collision of different paths that leads to permanent memories


It's odd, because it very much resembles relational memory for me. I can't summon a memory just by its name, only through its associations. The space for standalone base values is very limited. The plus side is that I can generally remember every caveat and detail there is to doing a thing once I've seen it once.


> it is far better to have one really good piece of software than 20 half-baked ones all doing roughly 80% of the whole.

I have to disagree on this one. Multiple diverging projects create _stability_, whereas a single project creates _fragility_. One bad step in the only project, and everyone suffers


Try providing a package of a GUI app "for Linux". After you support CentOS, Fedora, Debian, Ubuntu, Gentoo and Arch, you'll maintain a mess of brittle packaging scripts for years, breaking at each release and at some updates. Just making the first release for each is a challenge. Even AppImages, the least well-integrated option, will fail at some point.

You'll see how stable diversity is.

Then package for Windows XP. It probably works on Windows Vista, 7, 8 and 10.

Now I do see a lot of value in diversity. Resilience, ethics, competition, collaboration...

Stability isn't one.


If you're actually interested in packaging a generic Linux GUI app, check out snapd. It's originally an Ubuntu project, but has been pretty widely adopted at this point. I believe every distro you mentioned can run snaps, except maybe Gentoo.

https://snapcraft.io/
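
For reference, a snap is described by a single snapcraft.yaml. Here's a minimal sketch for a hypothetical GUI app; the name, base, plugin and interfaces are illustrative, so check the snapcraft docs for your actual stack:

    name: myapp            # hypothetical name
    base: core18           # assumed current base; pick yours
    version: '1.0'
    summary: Example GUI app
    description: A minimal illustrative snap recipe.
    confinement: strict
    grade: stable

    apps:
      myapp:
        command: bin/myapp
        plugs: [desktop, x11, wayland, home]

    parts:
      myapp:
        plugin: cmake      # assumes a CMake-based build
        source: .

One recipe instead of one per distro is the whole pitch.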


Snapd has its own set of problems.

It is sandboxed, so your use case must be compatible with that.

It uses AppArmor; what if the system uses SELinux?

And it's sandboxed in one particular way, so your app must support that way.

It's very recent; only the latest distros ship a snapd release that works well.

It's slower to execute.

Snapd assumes systemd.

But wait, did you say snapd? I thought you said Flatpak. Or AppImage.

Anyway, despite all that, it is easier to write a snap than a deb+rpm+whatever. I actually like this project a lot.

And again it proves my point: to get stability, we use snapd, a tool to compensate for diversity.


Do developers not know how to statically compile software anymore?
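
For what it's worth, a fully static build is still a one-liner in most toolchains. These commands are illustrative (file and binary names are made up):

    # C: link everything statically (glibc resists this; musl is friendlier)
    gcc -static -o myapp main.c

    # Go: disabling cgo yields a self-contained binary
    CGO_ENABLED=0 go build -o myapp .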

Also, you shouldn't be writing your own packaging scripts: leave those to distribution packagers. There are thousands of people who work on packaging these things, and the user of the distribution is much safer if they don't touch software distributed by random people.

This stuff is completely stable; just don't break the assumptions of the system without knowing what you're doing.


Depending on distributors means that your users will only use ages-old versions of your software, so you receive support requests for stuff that was probably already fixed or changed. Sending users back to distros also takes time.

And for static linking: that is kind of happening with those modern Snap and Flatpak and whatever systems for handling applications, but it's bad for security. I want to update my system zlib in case there is an issue, instead of depending on every application that consumes compressed untrusted streams to update its package in time. And I certainly don't want each little tool bundling its own Qt.
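
For instance, ldd shows which shared libraries a binary resolves from the system, which is exactly what bundling takes away (path and output are illustrative and abbreviated):

    $ ldd /usr/bin/myapp
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1
        libQt5Core.so.5 => ...

One system-wide libz.so.1 means one security update fixes every dynamically linked consumer at once.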


Oh, but you still need to make a deb, rpm, nix, whatnot, then compose with the OS tooling, conventions and expectations. And use those to provide your configuration/update/failure handling, menu integration (a .desktop entry; sketch below), permission system, logging, init system, notifications, etc.

Or provide an AppImage and ignore most integration.

This is my point exactly: to get stability, you remove diversity.

Static linking, one packaging system, and you don't have to care about how diverse the universe is.

But it also means you don't benefit from what those differences add: security updates, a dependency graph, automation, signing, jails, user documentation and training, OS integration, native window theming, etc.
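
To give a sense of scale, the "menu integration" bit alone means shipping a freedesktop .desktop entry roughly like this (values are illustrative), and it's the easiest of the items above:

    [Desktop Entry]
    Type=Application
    Name=MyApp
    Exec=myapp
    Icon=myapp
    Categories=Utility;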


> Oh, but you still need to make a deb, rpm, nix, whatnot, then compose with the OS tooling, conventions and expectations. And use those to provide your configuration/update/failure handling, menu integration, permission system, logging, init system, notifications, etc.

This shouldn't need to be done? The distribution's packagers handle most of this. Except for Nix, maybe? I hear they have a particularly fucked up ecosystem for packaging.


No.

The chances your project matches the criteria to be included in any distro repo are very low, and it's a lot of work and problems by itself.

They are not app stores you pay to get into. The gatekeepers have a very strict opinion on what to let in and how. And it's all done manually, and Fedora's policy is not Debian's is not Gentoo's.

Plus, statically linked projects are almost never accepted. Back to square one.

Besides, what if it's not free software?


> The distribution's packagers handle most of this.

And often fuck it up and introduce bugs or even vulnerabilities, intentionally ignore the developer, or simply fail to update packages in a reasonable time frame.

Relying on free labor to package software for you is a terrible terrible idea that helps keep Linux Desktop a shitshow.


Disagree strongly; as a user I feel distributions and package maintainers are a necessary defense against overly opinionated developers.

I'm glad there is a layer there that will patch and configure to better integrate into the system, and in some (very rare) cases remove user-hostile "functionality".


Sometimes it's better to trust the developers, not the packagers. An example is the Debian SSH vulnerability from 2008: https://www.debian.org/security/2008/dsa-1576

This was a bug introduced in packaging. See https://lwn.net/Articles/281436/ for more details.


I might agree with you in theory, but in the specific instance, there were never really two distinct projects. They were artificially maintained as two distinct projects to have an arbitrary “enterprise” project and the community-supported project.

The relationship between FreeNAS and TrueNAS is more like the difference between Fedora and RHEL. Both are sponsored by the same company, but the latter is the more mature, "enterprise" version.

So instead of having FreeNAS and TrueNAS we’re going to have “TrueNAS Core” and “TrueNAS Enterprise”.

It’s honestly not that big of a shift from a practical perspective.


> like the difference between Fedora and RHEL

That sounds more like RHEL vs CentOS, really. ...which is fitting, since those two have also been slowly coming closer and closer to each other.


> I have to disagree on this one. Multiple diverging projects create _stability_, whereas a single project creates _fragility_. One bad step in the only project, and everyone suffers

Anti-fragility is a lot more complicated than just having a lot of implementations.

The move away from CVS and SVN to much more easily distributed revision control is one of the best things that ever happened to large open source projects.

The reason for this is that when it's very easy to lose contributors and users to forks, it enforces a lot of project management discipline on the part of the project leadership. Before, when you held all the keys to the castle and it was difficult to move away, it was very tempting for people to use their position to impose "political" restraints on others.

And vastly reduced hosting costs, thanks to things like GitHub, GitLab, the spread of cloud providers and so on, make forking cheaper than ever before.

And these things make it easier to 'unfork' as well.

In this way we get the odd result that easy forking makes forking itself less necessary.

And when there is a major dispute in a particular community, cheap and easy forking (and recombining) means that people can actually have competing governance models and see which approach is better, rather than just fighting until everybody gets burned out and abandons the project.

LEDE vs OpenWrt is a good example of this.

LibreOffice vs OpenOffice.

GNOME vs Unity.

These things exist more due to competing governance models than anything else.

So this can be summarized as: "improvements in anti-fragility in modern large open source projects owe more to the fluidity with which projects can be managed, forked, and recombined than to the sheer number of implementations users and contributors can choose from"


Ok, one or two. A couple. But not the tower of Babel that we have today. No need for a monoculture, but no need for forks for forks' sake either. Or for complete re-implementations of stuff that already works because it is cool to be the maintainer of a project rather than a collaborator. In the end it is all about having something that is viable and that can compete on merits with closed source.


> In the end it is all about having something that is viable and that can compete on merits with closed source.

I thought it was about scratching your own itch? "I want to be a maintainer" kinda sounds like an itch.


Even funnier: forks over social identity/ideological differences


Not even software exists in a vacuum; it's influenced by real-world events. There are lots of people who see some of these changes as a threat to the existence of the software they use and who want to secure it themselves


That depends on how many people are involved. Enough people work on desktops that more is better. However, for more obscure things, merging projects can be the difference between two projects that are almost dead and one project that shows signs of life.

