Open Source has worked fine here. The author hasn't found financial support for the work, so they just want to change course, and that's a perfectly fine path forward.
If this is really much more than a personal project "for fun, on my leisure time", and it has become an actually serious product-level project that provides good value for people in commercial environments, there's clearly an opportunity for a for-profit company to step in and cover that niche. But that'd require that users become customers and actually part with their money to pay for it :)
I guess most will instead switch to asking who the next project maintainer will be, to whom the new bug reports and complaints can continue to be sent for free. But if there's money to be made by using a tool, there should be money paid for using it too. We "just" need to find the new generation of FOSS Financial Sustainability solutions that actually work! Donations don't make the cut.
The effort to set up donations is almost always more trouble than the donations that result are worth. The time is better spent looking for a job, or working on a commercial project that will make money. People simply don't donate to open source projects at a level that matters.
I've been working on Open Source software for 30+ years. There's no money in it, if your idea for making money is "accept donations". I don't like it, but it's a fact. If you want to make money, you have to make something that isn't free (and even then, if you give away the most valuable parts, as in "open core" licensing, you probably still won't make enough money to make the development worth it).
When I was young and driven by idealism and optimism, I assumed that with enough users I'd be able to ring the cash register somehow. Turns out not so much. We got the users, the money never came. There are a few outliers, but there probably aren't a lot of opportunities to found a Red Hat today.
Second this. There's no money in donations. Also, the target demographic for donations is individuals, who rarely donate and are kind of desensitized to the whole thing at this stage (since everyone and their mother asks for donations).
Companies need to jump through legal and accounting hoops to donate; they very much prefer a simple purchase, which is nice! But setting up actual purchases is a whole different ordeal with open source: now the question is, why is the company paying for something that's free?
Source: my own 5-star open source project with 500k+ active users, which has paid for 3 coffees in total over 10+ years. I still get like $2 sometimes after a long while.
The most annoying thing is that the people who demand most loudly that you set up donations don't actually donate once you go to the trouble of doing so. We had a guy open an issue about it on GitHub and follow up over and over... we finally did it. Nothing. I guess they think making demands is helping.
I wonder whether the author has considered taking the product to a paid level and what would be necessary for it.
Obviously, all contributors hold some form of copyright, which may or may not have been waived depending on whether a CLA was in place and on the jurisdiction. So he would need to get permission from the copyright holders, maybe in exchange for a percentage of the profit.
Changing the license of already existing code? You might not be able to do that without permission from other contributors, I agree.
But it's MIT license. We can open a company tomorrow, take that code, and start selling it. Further development and improvements of the code could be trivially done openly or behind closed doors. FWIW the author themselves could do that if they wanted.
I've been working on a software package I'm hoping to release in a few months... I'm really torn between splitting it into FLOSS with commercial extensions, or just going fully private... I was planning on a pretty generous free tier, but hoping to make a bit on the side from commercial customers.
It's a bit of a niche as it is, so that's going to be rough with any kind of pricing model, since a large part of that niche is either homebrew types or the commercial side of the industry, which will likely require some more integrations and customization.
You could dual-license as well, so it’s GPL or AGPL for personal, OSS, or academic use, but requires a paid commercial license for commercial use.
I suggest GPL or AGPL because their copyleft clauses make them hostile towards platform providers who might otherwise seek to profit from your work without paying.
Yeah, but the copyleft makes anything they build around it a derivative work that they also have to release sources for - especially with AGPL. Most don’t want to do that because that’s where their IP lives.
Not all open source licenses are copyleft licenses (e.g., MIT very much isn’t), but at the very least copyleft licenses make it much harder to exploit open source code commercially without giving back in some way, whether that’s code, or cash for a commercial license.
Not perfect, by any means, but definitely an improvement over more permissive licenses.
I am aware of how much I’m starting to sound a bit like RMS in my old age.
I wholeheartedly agree. Licensing is a complex topic about which I've read a good deal, and even within the Open Source communities there are usually a lot of misconceptions, so I like chiming in with less commonly mentioned but very practical effects of it all, in case it helps someone learn a tiny bit that day.
In this case the provider would of course have to comply with the AGPL and release their modifications, as you mention, but it's important to note that no FOSS license protects at all against, for example, just offering the code as a service. It's the exact reason why MongoDB changed licenses, and why a stream of commercial products then started switching to "source-available" licenses in the recent past.
It would be dual license effectively... the base version AGPL and the Commercial version with additional functionality. Though I'd considered BSL and alternatives... and as mentioned, just closed/commercial only.
Here is a simple trick: accept plenty of open source contributions as-is, without any kind of copyright assignment and without requiring contributors to sign anything that grants the power to relicense.
There you go, guaranteed community ownership of the code, best face and "good will" as promised by choosing a FOSS license to begin with, and future rug pulls averted.
Seeing it from the other side of the fence: if you see that all contributors are required to cede controlling power into a single hand (certain Foundations excepted, yadda yadda), it's not proper Open Source in spirit, only in form; and closing up the source is just a change of mind away.
Unlike Android indeed: when you keep using a perfectly working phone that happens (by accident or force of nature) to live longer than the official lifetime some executives in a remote office decided to grant it, the web browser cannot be updated any more. Just the single most security-sensitive piece of software on any computer. Who would have guessed people were going to complain!
And neither does Google. The latest version of Chrome requires the version of Android released in 2019. The latest version of iOS supports my iPad released in 2019.
I would always go to the official docs page for the needs I have, and use their HTTP library (or any other). It removes decision-making, having to vet the quality practices of lesser-known libraries, and the risk of supply chain attacks (assuming the stdlib of a language gets more attention to detail and security than any random 3rd-party library thrown onto GitHub by a small group of unpaid devs).
Only when it falls short of my needs would I drop the stdlib and go in search of a good-quality, reputable, and reliable 3rd-party lib (which is easier said than done).
This has worked well for me with Go and Python. I would enjoy the same with Rust, or at a minimum a list of libraries officially curated and pointed to directly by the language docs.
I just grabbed an Android remaster of "Broken Sword: Shadow of the Templars", a 90s point-and-click adventure to which a hints system has been added; it pops up automatically after a timeout of the player not progressing.
This can be set as high as 1h of being stuck, or to 5 minutes; but by default it is 30 seconds.
My inner kid was screaming "that's cheating!" :-D but on second thought it's a very cool feature for us busy adults. Still, it's sad to see the extremes game devs have to go to in order to appease the short-term, mindless consumers of today's TikToks.
But more seriously, where's the joy of being stuck for a while on a puzzle, the kind of experience that makes you remember that scene for 30 years? An iconic experience that separates this genre from just being an animated movie with more steps.
I couldn't imagine "Monkey Island II but every 30 seconds we push you forward". Gimme that monkey wrench.
TFA and this comment just made me have this thought about today's pace of consumption, work, and even gaming.
> went closed source and started injecting adware into checkout pages ... [and] geolocation tracking.
Maybe we should resort to publicly blaming and shaming this sort of action. DDoS their servers, fill their inbox with spam, review-bomb anything they do. Public court justice a la 4chan trolling. Selling out is a lawful decision, of course, but there is no reason it shouldn't come with the price tag of becoming publicly hated. In fact, it might help people who are on the verge to stay on the ethical side of things (very ironically).
I'm just kinda joking (but I wouldn't hate it if I were rug-pulled and the person who did it got such treatment)
Calm down, just spreading the word that the extension is adware and having everyone uninstall it is sufficient to demonstrate that this move was a mistake. Trying to ruin someone's life is going completely overboard. Repercussions should be proportionate, you don't shoot people for stealing a candy bar.
Why not? Sincere question. As a very superficial idea, if we go back to the drawing board, we could for example define our new cool concept of an address as an IPv4 address plus a hex suffix, maybe at the expense of not having a humongous address space.
So 10.20.30.40 would be an IPv4 address, and 10.20.30.40:fa:be:4c:9d could be an IPv6 address. With the :00:00:00:00 suffix being equivalent to the IPv4 version.
I just made this up, so I'm sure that a couple years of deep thought by a council of scientists and engineers could come up with something even better.
Because those couple of years of deep thought would mostly go into:
- Where in the bit pattern IPv4-mapped addresses should go
- Coming up with some variation of NAT64, NAT464, or similar concepts to communicate between/over IPv4 and IPv6 networks
- Blaming the optional extensions/features of IPv6 for being too complex and then inventing something which has 90% of the same parts, which turn out to be actually required
It's even easy to get distracted in a world of "what you can do with IPv6" instead of just using the basics. The things that actually make IPv6 adoption slow are:
- A change in the size of the address field, which requires special changes and configuration in network gear, operating systems, and apps, because it's no longer just one protocol whose transport you have to think about until the migration is 100% complete.
If IPv4 were more painfully broken then the switch would have happened long ago. People just don't care to move fast because they don't need to. IPv6 itself is fine though and, ironically, it's the ones getting the most value out of the optional extensions (such as cellular providers) who actually started to drive IPv6 adoption.
The header of an IPv4 packet has the source and destination addresses, both as 32-bit values. These fields are adjacent, and there's other stuff next to them. If you appended more bytes to the source address, routers would think that those new bytes are the destination address. This would not be backward compatible.
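To make the fixed-offset point concrete, here's a rough Go sketch (my own illustration, assuming Go 1.20+ for the slice-to-array conversion) of how a deployed router reads both addresses from the hard-coded RFC 791 offsets:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // parseAddrs reads addresses out of a raw IPv4 header the way existing
    // routers do: source at bytes 12-15, destination at bytes 16-19, offsets
    // fixed by RFC 791. Any bytes appended to the source field would be read
    // here as the destination.
    func parseAddrs(header []byte) (src, dst netip.Addr) {
        src = netip.AddrFrom4([4]byte(header[12:16]))
        dst = netip.AddrFrom4([4]byte(header[16:20]))
        return
    }

    func main() {
        h := make([]byte, 20)                  // minimal 20-byte IPv4 header
        copy(h[12:16], []byte{10, 20, 30, 40}) // source address
        copy(h[16:20], []byte{192, 0, 2, 1})   // destination address
        src, dst := parseAddrs(h)
        fmt.Println(src, dst) // 10.20.30.40 192.0.2.1
    }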
Interestingly, what you're describing really is similar to how many languages represent an IPv4 address internally. Go embeds IPv4 addresses inside of IPv6 structs as ::ffff:{IPv4 address}: https://cs.opensource.google/go/go/+/go1.26.2:src/net/ip.go;...
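A quick sketch of what that mapping looks like from Go code (using net/netip here rather than the net.IP internals in the linked source; an illustration, not the authoritative representation):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        v4 := netip.MustParseAddr("192.0.2.128")
        // As16 returns the 16-byte form, which for an IPv4 address is the
        // v4-mapped layout: 80 zero bits, 16 one bits, then the IPv4 bytes.
        mapped := netip.AddrFrom16(v4.As16())
        fmt.Println(mapped, mapped.Is4In6()) // ::ffff:192.0.2.128 true
        fmt.Println(mapped.Unmap())          // 192.0.2.128
    }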
That's not a language-specific thing, but is actually part of the IPv6 RFCs as IPv4-mapped IPv6 addresses: [1], [2]
This is super useful because (at least on Linux) IPv6 sockets are dual-stack by default and bind to both IPv4 and IPv6 (unless you use the IPV6_V6ONLY sockopt or a sysctl), so you don't need to open and handle IPv4 and IPv6 sockets separately (well, maybe some extra code for logging/checking properly against the actual IPv4 address).
That is also documented in ipv6(7):
IPv4 connections can be handled with the v6 API by using
v4-mapped-on-v6 address type; thus a program needs to support only
this API type to support both protocols. This is handled
transparently by the address handling functions in the C library.
IPv4 and IPv6 share the local port space. When you get an IPv4
connection or packet to an IPv6 socket, its source address will be
mapped to v6.
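As a minimal sketch of that single-socket pattern (assuming a Linux host with the default bindv6only=0, i.e. no IPV6_V6ONLY):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // One wildcard AF_INET6 socket; with the dual-stack default it also
        // accepts IPv4 clients, which the kernel reports as v4-mapped
        // (::ffff:a.b.c.d) addresses.
        ln, err := net.Listen("tcp", "[::]:8080")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        for {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            // Note: Go's net.IP prints a v4-mapped address back in dotted-quad form.
            fmt.Println("client:", conn.RemoteAddr())
            conn.Close()
        }
    }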
In IPv4 you only need to transmit IPv4 addresses. If the "cannot be" in the parent post refers to the exact byte layout in packets, then I'll go the other way and agree. The only reason ASCII can pass itself off as UTF-8 is that ASCII didn't use all 8 bits of a byte to begin with. The only way to have something similar here would be if IPv4 hadn't used all of its allocated address bits... which is not the case.
What I argued was that IPv4 could be embedded into IPv6 address space if they had designed for it. But I agree that the actual packet header layouts would need to look at least a bit different.
> What I argued was that IPv4 could be embedded into IPv6 address space if they had designed for it.
Like:
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
They did that. The problem is that an IPv4-only host can't talk to IPv6. Adding more bits to IPv4 creates a new protocol, just like IPv6, and has the same transition issues.
The protocol field in the IPv4 header seems like a reasonable choice. A value would be assigned for IPv6, and if that value is used then additional header data follows the IPv4 header.
That's similar to, but not exactly, what we were discussing. In particular, 6in4 has a full IPv6 header after the IPv4 header, but here the suggestion was instead that supplementary information would follow. For example, the most significant address bits could be stored in the IPv4 header and the least significant ones in the new part.
That's not meaningfully different. It would just amount to a slightly less redundant representation of the same data -- the steps needed to deploy it would be the same, and you'd still have all the same issues of v4 hosts not understanding your format.
Not really reasonable. That would:
1) Make routing inefficient, because routers would have to parse an additional, non-adjacent, non-contiguous header to get the source and destination addresses. 2) Break compatibility, because there would exist "routers" that do not understand the new headers; they'd receive your IPv4-with-v6 packet and send it somewhere else.
The result is basically the same situation we are in today, except much more hacky. You'd still have to do a bunch of upgrades.
> So 10.20.30.40 would be an IPv4 address, and 10.20.30.40:fa:be:4c:9d could be an IPv6 address. With the :00:00:00:00 suffix being equivalent to the IPv4 version.
Like
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
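The mapping quoted above is mechanical enough to sketch in a few lines of Go (sixToFourPrefix is just an illustrative name, not a real library function):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // sixToFourPrefix builds the 6to4 prefix for a global IPv4 address by
    // placing its four bytes directly after the 2002::/16 prefix.
    func sixToFourPrefix(v4 netip.Addr) netip.Prefix {
        a := v4.As4()
        var b [16]byte
        b[0], b[1] = 0x20, 0x02
        copy(b[2:6], a[:])
        return netip.PrefixFrom(netip.AddrFrom16(b), 48)
    }

    func main() {
        fmt.Println(sixToFourPrefix(netip.MustParseAddr("192.0.2.4")))
        // 2002:c000:204::/48
    }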
So you have to ship new code to every 'network element' to support your "IPv4+" plan. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new, longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (A lot of legacy code did not have room in its data structures for multiple reply types: sure, you'd get the "A", but unless you updated the code to get the "A+" address (for "IPv4+" addresses) you could never get the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
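To illustrate the "ask for the new record type or you never see it" point, a small hypothetical Go sketch (example.com as a placeholder host):

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        r := &net.Resolver{}
        // Legacy-style lookup: only A records, only 32-bit addresses.
        v4, _ := r.LookupIP(context.Background(), "ip4", "example.com")
        // Updated lookup: AAAA records; code that never asks for these
        // never learns that the longer addresses exist.
        v6, _ := r.LookupIP(context.Background(), "ip6", "example.com")
        fmt.Println("A:   ", v4)
        fmt.Println("AAAA:", v6)
    }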
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6. In any 'address extension' plan the legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the "IPv4+" and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Deploying the new "IPv4+" code will take time, so there will be partial deployment, and partial deployment of IPv4+ is no different from partial deployment of IPv6: you have islands of it and have to fall back to the 'legacy' plain-IPv4 protocol when the new protocol fails to connect.
"Just adding more bits" means updating a whole bunch of code (routers, firewalls, DNS, APIs, userland, etc) to handle the new data structures. There is no "just": it's the same work for IPv6 as with any other idea.
(This idea of "just add more addresses" comes up in every discussion of IPv6, and people do not bother thinking about what needs to change to "just" do it.)
> If IPv4 were more painfully broken then the switch would have happened long ago.
IPv4 is very painful for people not in the US or Western Europe who (a) were not there early enough to get in on the IPv4 address land rush, or (b) don't have enough money to buy as many IPv4 addresses as they need (assuming someone wants to sell them).
So a lot of areas of the world have switched, it's just that you're perhaps in a privileged demographic and are blind to it.
> IPv4 is very painful for people not in the US or Western Europe who (a) were not there early enough to get in on the IPv4 address land rush, or (b) don't have enough money to buy as many IPv4 addresses as they need (assuming someone wants to sell them).
The lack of pain is not really about the US & Western Europe having plenty of addresses or something of that nature; it's that alternative answers such as NAT and CG-NAT (i.e. double NAT, where the carrier uses non-public ranges for the consumer connections) are still growing faster in those regions than IPv6 adoption, when excluding cellular networks (which have been pretty good about adopting IPv6 and are where most of the IPv6 traffic in those regions comes from).
I think your summary is really great. One of the better refutations I've seen about the "what about v4 but longer??" question.
However, I think people do get tripped up by the paradigm shift from DHCP -> SLAAC. That's not something that is an inevitable consequence of increasing address size. And compared to other details (e.g. the switch to multicasting, NDP, etc.), it's a change that's very visible to all operators and really changes how things work at a conceptual level.
The real friction with SLAAC was that certain people (particularly some at Google) tried to force it as the only option on users, not that IPv6 ever forced it as the only option. The same kind of thing would likely occur with any new IP version rolling out.
SLAAC isn't an inevitable consequence of increasing the address size; it's a useful advantage of it. Almost no one had big enough blocks in IPv4 where "just choose a random address, and as long as no one else seems to be currently claiming it, it's yours" was a viable strategy for assigning an address.
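For example, the core of that strategy is tiny; a hedged sketch, with 2001:db8:1:2::/64 standing in for whatever prefix the router advertises:

    package main

    import (
        "crypto/rand"
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("2001:db8:1:2::/64")
        b := prefix.Addr().As16()
        // Randomize the 64-bit interface identifier inside the advertised /64.
        if _, err := rand.Read(b[8:]); err != nil {
            panic(err)
        }
        fmt.Println(netip.AddrFrom16(b))
        // Duplicate Address Detection then confirms no one else already claims it.
    }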
There are some nice benefits of SLAAC over DHCP, such as modest privacy: if device addresses are randomized, they become harder to guess/scan; if there's no central server with a registration list of every device, even more so (that's the first S, Stateless). That's a great potential win for general consumers and a far better privacy strategy than the accidental (and somewhat broken) privacy screening of NAT44. It's at odds with corporate device management strategies, where top-down assignment "needs to be the rule" and device privacy is potentially a risk, but that doesn't make SLAAC a bad idea; it just highlights that consumer needs and big-corporate needs are very different styles of sub-networks of the internet, and they conflict a bit. (Those conflicting interests are also why consumer equipment is leading the vanguard to IPv6 while corporate equipment languishes behind in command-and-control IPv4 enclaves.)
> Furthermore, DHCPv6 holds you back from various desirable things like privacy addresses and (arguably even more importantly) IPv6 Mostly.
Why would DHCPv6 hold back privacy addresses? Can't DHCPv6 servers generate random host address bits and assign them in DHCP Offer packets? Couldn't clients generate random addresses and put them in Request packets?
DHCPv6 temporary addresses have the same properties as SLAAC
temporary addresses (see Section 4.6). On the other hand, the
properties of DHCPv6 non-temporary addresses typically depend on the
specific DHCPv6 server software being employed. Recent releases of
most popular DHCPv6 server software typically lease random addresses
with a similar lease time as that of IPv4. Thus, these addresses can
be considered to be "stable, semantically opaque". [DHCPv6-IID]
specifies an algorithm that can be employed by DHCPv6 servers to
generate "stable, semantically opaque" addresses.
How does DHCPv6 hold back IPv6-mostly? First, most clients will send out a DHCPv4 request in case IPv4 is the only option, in which case IPv6-mostly can be signalled via the DHCPv4 "IPv6-Only Preferred" option (option 108, RFC 8925).
I was unaware of this, so thanks. Sounds like it addresses (pun intended) my concern.
> How does DHCPv6 hold back IPv6-mostly? First, most clients will send out a DHCPv4 request in case IPv4 is the only option, in which case IPv6-mostly can be signalled