Sure, without exploits they can steal your api keys, read your personal data, and access your browser data. With exploits they can update packages on your computer too.
How would that help, unless you happen to check the dotfiles git diff before running _anything_? I suppose the check could go in your shell prompt or a cron job that detects diffs, but I bet absolutely nobody does this.
Endless ways, which is why I do not understand why sudo is ever used anymore, especially in production.
You do not need root to do anything in Linux these days anyway between Namespaces and Capabilities so there is really no reason for root to be accessible at all or have any processes running as root post boot.
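Concretely: unprivileged user namespaces give a normal user a root-like uid inside a sandbox without any setuid helper, and file capabilities replace blanket root for single privileges like binding low ports. Illustrative commands only (requires util-linux and a kernel with unprivileged user namespaces enabled; the server binary is hypothetical and `setcap` itself is run once by an admin):

```shell
# Become "root" inside a new user namespace -- no sudo, no setuid binary:
unshare --user --map-root-user sh -c 'id -u'   # 0, but only inside the namespace

# Instead of running a web server as root just to bind port 80,
# grant the binary the single capability it needs:
setcap 'cap_net_bind_service=+ep' /usr/local/bin/myserver   # hypothetical path
getcap /usr/local/bin/myserver
```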
> Malware can make a fake unprivileged sudo that sniffs your password.
Not on my Linux workstation, though. No sudo command installed, and not a single setuid binary: even su isn't setuid, so only root can actually use su and nobody else.
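This is easy to audit on any box; with the setuid/setgid bits absent everywhere, a fake sudo dropped into your PATH has no real escalation path to piggyback on:

```shell
# List every setuid or setgid file on the root filesystem.
# On the setup described above, this prints nothing.
find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null
```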
The only way to log in as root is either from tty2 (but the root password is 30 characters long, on purpose, so that I never type it; logging in from tty2 isn't really an option) or from another computer, using a Yubikey (no password login allowed). That other computer is on a dedicated LAN (a physical LAN, not a VLAN) that exists only so root can SSH in. Yes, I do allow root to SSH in, but only with U2F/Yubikey; I have to, as it's the only real way to log in as root.
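In OpenSSH terms that policy is only a few lines of sshd_config. This is a sketch of the setup as described, not the poster's actual config; the FIDO/U2F key types need OpenSSH 8.2+, and `PubkeyAcceptedAlgorithms` inside a Match block needs 8.5+:

```
# /etc/ssh/sshd_config (sketch)
PermitRootLogin prohibit-password   # root may log in, but never with a password
PasswordAuthentication no
PubkeyAuthentication yes

# Accept only hardware-backed (FIDO/U2F) key types for root:
Match User root
    PubkeyAcceptedAlgorithms sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com
```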
It is what it is and this being HN people are going to bitch that it's bad, insecure, inconvenient (people typically love convenience at the expense of security), etc. but I've been using basically that setup since years. When I need to really be root (which is really not often), I use a tiny laptop on my desk that serves as a poor admin's console (but over SSH and only with a Yubikey, so it'd be quite a feat to attack that).
Funnily enough last time I logged in as root (from the laptop) was to implement the workaround to blacklist all the modules for copy.fail/dirtyfrag.
That laptop doesn't even have a Wi-Fi driver installed. No graphical interface. It's minimal. It's got an SSH client, a firewall (and so does the workstation), and that's basically it. As it's on a separate physical LAN, no other machine can see it on the network.
I did set that up just because I could. Turns out it's fully usable so I kept using it.
Now of course I've got servers, VMs, containers, etc. at home too (and on dedicated servers): that's another topic. But on my main workstation a sudo replacement function won't trick me.
> Realistically if you have installed malware, you need to do a full wipe of your computer anyway
You might be the exception to this sentiment. But out of curiosity, after all that setup, would you feel confident trying to recover from malware (rather than taking the "nuke it from orbit" approach)?
In my case I use QubesOS so sudo is useless even if present since every security domain is isolated by hypervisor.
For servers, neither sudo nor a package manager should exist. There is no good reason for servers to run any processes as root or to have any path to reaching root. Servers should generally be immutable appliances.
FYI, in English the phrase "since years" is grammatically incorrect and sounds unnatural to a native speaker's ears. The correct phrase would be "I've been using that setup for years."
Thanks for sharing this, that seems like a very cool setup. I have a very old good-for-almost-nothing laptop that would be perfect for this, might just have to copy you!
Anyone wanna do a quick offline MVP on a general vision assistant for the blind? We've had things like Google Lens for a while, but it's a bit vision and touchscreen-centric.
While we are bragging, stagex was the first to hit 100% full-source-bootstrapped, deterministic, and hermetic builds last year, and the first to make multiple signed reproductions by different maintainers on their own hardware mandatory for every release.
Debian has come a long way, but when Debian says reproducible they mean they grab third-party binaries to build theirs. When we say reproducible we mean 100% bootstrapped from source code all the way through the entire software supply chain.
Guix did a full source bootstrap first, credit where it's due, but it does not apply to their whole tree. E.g. Haskell is bootstrapped with a binary, QEMU includes binary firmware blobs, etc.
Guix is not fully bootstrapped or reproducible.
To your point though, the incomplete efforts of many other distros absolutely accelerated us.
Unfortunately, the term “reproducible” can be interpreted in many ways because there is no strict and complete definition. People and projects bend it to their liking.
Centralized proprietary software on proprietary platforms can always be opted into a special update that makes all the private keys deterministic, rendering end-to-end encryption useless against anyone with knowledge of that targeted backdoor.
Only FOSS can deliver verifiable E2EE, and all centralized and proprietary solutions like Zoom, Whatsapp, Instagram, etc should end the security theater.
I applaud Meta for at least being honest about one product.
While I agree reproducible builds are a huge part of the answer, if you get your builds from Google Play or the App Store you have no idea if anyone has reproduced the particular build that was served to your device.
A solution to this would be independent reproducible builds like F-Droid does, but Moxie rejected this, arguing it would cause them to lose control of the platform and the install metrics Google and Apple provide. I always thought that was a weird position for a privacy tool.
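The verification step itself is simple once the build is deterministic: rebuild from the tagged source and compare against the binary the store actually served you. A hypothetical sketch (file names are made up, and real APK verification additionally has to strip or normalize the signing block first, which tooling like F-Droid's does):

```shell
# Compare a served binary against an independent rebuild (sketch).
served=app-from-store.apk   # hypothetical: pulled off the device
rebuilt=app-rebuilt.apk     # hypothetical: built from the tagged source

sha256sum "$served" "$rebuilt"
if cmp -s "$served" "$rebuilt"; then
    echo "REPRODUCED: store binary matches independent build"
else
    echo "MISMATCH: served binary differs from source build" >&2
fi
```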
Any community that cares could then at least make the right choice of client for their community. The masses never care, but what matters is that privacy is actually a choice.
At the risk of being pedantic, that's not exactly what the principle says. Its claim is that a cryptosystem should be secure even if everything about the system except the private key is public knowledge. It doesn't require that the system be public, only that the security of a non-public system shouldn't rely on its non-public nature. A closed-source cryptosystem designed to remain secure even if someone discovers how it works satisfies the principle just fine.
It's an even simpler user experience to just publicly publish all private information.
Can you imagine? I wouldn't even need to give my social security number to another org manually again. Anyone could just look it up. It would make things so easy for everyone.
It's a trade-off. If someone wanted to, they could keep reducing security to improve the user experience, but a product with bad security will eventually become a problem.
>Anyone could just look it up.
Most people's SSNs have already been leaked or stolen so it's just security theater to pretend they are still private information.
pnpm is even worse. There is no way to bootstrap it without binary blobs, making it an easy target: a supply chain attack waiting to happen that could hide in plain sight indefinitely.
You should probably caveat any post you make about security concerns with that, so people can more easily judge whether your concerns line up with their threat model.
With supply chain attacks in the news daily now wreaking havoc across the whole industry, ignoring them is negligent in all cases where software is written for the consumption of anyone other than the author.
The entire medical industry was negligent for 100 years following Ignaz Semmelweis proving basic sanitation tactics would save countless lives.
Similarly, the entire software industry is and has been negligent since 1984, when Ken Thompson first demonstrated the otherwise unstoppable and undetectable attacks on software that basic supply chain integrity tactics could prevent.
(The buyers are the NSA, the IDF, Cellebrite, NSO and its successor corporation and that kind of thing. Depends on what you are offering)
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
Unfortunately this is correct. As a security researcher, I have set millions in potential profit on fire by reporting vulns to projects that offer no bounties instead of selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Harder and harder to get good policy like what you describe when tech-adjacent people loudly argue for criminal penalties for anything other than coordinated disclosure :(