Nearly all ISPs these days are deploying IPv6 for their mobile networks and core service networks, especially in less developed markets^1. The reason is simple: a cost justification. What doesn't exist is a cost justification for enterprises to deploy IPv6, or for ISPs to deploy IPv6 on residential and corporate Internet service.
IMO with the right market conditions, IPv6 could spread really fast, within 6-24 months. For example, most cloud providers are now charging for IPv4 addresses while IPv6 is free. Small changes like that push things in the right direction.
I don't even know why clouds offer public IP addresses. In my opinion all clouds should only have a gateway that routes via host header for millions of customers. IPv4 should be a special privilege for special situations at a higher price. Then these clouds could own maybe 20 IPs total instead of millions.
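The gateway idea can be sketched as a shared front end keyed on the HTTP Host header. A minimal sketch (the hostnames and backend addresses are made up for illustration):

```python
# Hypothetical routing table for a shared gateway: thousands of tenants
# behind one public IP, dispatched by the Host header of each request.
ROUTES = {
    "alice.example.com": ("10.0.0.11", 8080),
    "bob.example.com": ("10.0.0.12", 8080),
}

def route(host_header: str):
    """Return the internal (address, port) backend for a Host header,
    or None if the hostname isn't a known tenant."""
    return ROUTES.get(host_header.strip().lower())
```

In practice this is roughly what reverse proxies and load balancers already do; for HTTPS the dispatch has to happen on the TLS SNI field instead, since the Host header is encrypted.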
> In my opinion all clouds should only have a gateway that routes via host header for millions of customers.
This is incompatible with TCP/IP networking. In TCP connections, (sender_address, sender_port, receiver_address, receiver_port) is a unique combination. Those numbers together uniquely identify the sender talking to the receiver. For a public webserver:
* sender_address is the client machine's IP address
* sender_port is a random number from 0..65535 (not quite, but let's pretend)
* receiver_address is the webserver's IP address
* receiver_port is 443
That means it'd be impossible for one client IP to be connected to one server IP more than 65535 times. Sounds like a lot, right?
* sender_address is the outbound NAT at an office with 10,000 employees
Now each user can have at most 6.5 connections on average to the same webserver. That's probably not an issue, as long as the site isn't a major news org and nothing critical is happening. Now given your scheme:
* receiver_address is the gateway shared by 10000 websites
Now each user can have at most 6.5 connections to all of those 10000 websites combined, at once, total, period. Or put another way, 100,000,000 client/website combos would have to fit into the same 65535 possible sender_ports. Hope you don't plan on checking your webmail and buying airline tickets at the same time.
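The arithmetic above is easy to check. A back-of-envelope sketch (treating the ephemeral port range as a flat 65,535, as the comment does):

```python
# Each (sender_address, receiver_address, receiver_port) pair can be
# disambiguated only by sender_port, so a single NATed office IP gets
# at most ~65,535 concurrent connections per destination.
EPHEMERAL_PORTS = 65535

def connections_per_user(users: int, destination_ips: int = 1) -> float:
    """Average concurrent connections per user toward destinations on
    a given port; each extra destination IP multiplies the port space."""
    return EPHEMERAL_PORTS * destination_ips / users

# 10,000 employees behind one NAT, one shared gateway IP:
per_user = connections_per_user(10_000)  # ~6.55 connections each
```

Giving the gateway more addresses scales this linearly, which is why consolidating a whole cloud onto a handful of IPs breaks down.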
This is actually a good point. I guess 20 IPs per cloud infra company is probably too few. But maybe these cloud companies can have 20k IPs instead of 2 million?
I find the whole OVH web control panel atrocious. It's so buggy I couldn't even get my account deleted, even after contacting their customer support (they just told me to fix it myself using their APIs...).
Imagine if one day someone came up with a "better" way to chew food, but you had to learn how to do a super complex jaw movement and it wouldn't work in restaurants. It has no obvious benefit to you. The only motivation is that a small group of obsessively passionate (but not in a good way) people say at some unknown point in the future food won't be edible anymore.
IPv6 just tried to do too much, so it failed at everything. Putting letters in IP addresses made it nearly impossible to remember what your network settings were supposed to be.
It is nothing short of a miracle that devices can even get IPv6 addresses. SLAAC was supposed to replace DHCP, but it couldn't provide DNS server addresses. DHCPv6 was introduced to replace SLAAC, but this time they forgot to add a way to communicate a default route. This led to Cisco, Microsoft, and Google all taking completely different approaches, and the IETF helpfully blocking any efforts at cross-vendor standardization because of v6 zealots.
Meanwhile, everybody else is using a plastic skull that they carry around with them to pre-chew the inedible food, which is the majority of the food you can get these days.
"It's not inedible," they say. "Just let me get my skull out."
> IPv6 just tried to do too much so it failed at everything. Putting letters in IP addresses made it near impossible to remember what your network settings were supposed to be.
People said the same sort of thing about v4: that it was hard to configure because you needed to know four separate addresses (IP, netmask, default route AND the DNS server) and if you mix up any of these it doesn't work.
As it turns out, in both cases it's just a lack of familiarity, not actual difficulty. The super complex jaw movement is just a regular bite, but you puff your cheeks out a bit. Or er, something.
> This led to Cisco, Microsoft, and Google all taking completely different approaches [...] but this time they forgot to add a way to communicate a default route
"There should only be one way to do things... wait, no, not like that."
> I've been using Duplicati to sync a lot of data to S3's cheapest tape-based long term storage tier.
There are actually a lot of cheaper S3-compatible services out there (like Backblaze B2 or Cloudflare R2). The pricing may work out such that you can just back up to these directly. It certainly gives you far more control than Backblaze Backup.
Stripe does this in a cool way. Their REST API is versioned by date, and each time they change it they add a stackable compatibility layer, so your decade-old code will still work.
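Stripe's actual mechanism isn't shown here, but the idea of stackable, date-keyed compatibility layers can be sketched: serve responses in the newest shape, then apply every downgrade transform newer than the client's pinned date. The field names and dates below are made up for illustration:

```python
from datetime import date

def drop_amount_cents(r: dict) -> dict:
    """Hypothetical downgrade: on some date, 'amount_cents' replaced
    'amount'; older clients still expect 'amount'."""
    r = dict(r)  # don't mutate the caller's response
    r["amount"] = r.pop("amount_cents")
    return r

# Each entry downgrades a response from the shape introduced on that
# date back to the previous shape. Layers stack as the API evolves.
DOWNGRADES = [
    (date(2023, 1, 1), drop_amount_cents),
]

def render(response: dict, pinned: date) -> dict:
    """Apply, newest first, every downgrade the pinned client predates."""
    for introduced, downgrade in sorted(DOWNGRADES, key=lambda t: t[0],
                                        reverse=True):
        if pinned < introduced:
            response = downgrade(response)
    return response
```

A client pinned to a 2022 date sees the old field names; a client pinned after the change sees the new ones, and neither ever breaks.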
Ceph's overheads aren't that large for a small cluster, but they grow as you add hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times on different machines, which leads to a large overhead compared with local storage.
Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
Arguably, "comma as a separator" is close enough to comma's usage in (many) written languages that it makes it easier for less technical users to interact with CSV.
https://www.openssh.com/legacy.html - Legacy algorithms in OpenSSH, which explains a little about what they do. Then there is also your identity key, which you authenticate yourself with and which is placed in the server's authorized_keys.
Usually smooth. But if you're running a production workload, definitely do your prep work: working and tested backups, upgrading one node at a time and testing, reading the release notes, waiting a week after major releases, etc. If you don't have a second node, I highly recommend one; Proxmox can do ZFS replication for fast live migrations without shared storage.
Unfortunately clustered storage is just a hard problem, and there is a lack of good implementations. OCFS2 and GFS2 exist, but IIRC there are challenges for using them for VM storage, especially for snapshots. Proxmox 9 added a new feature to use multiple QCOW2 files as a volume chain, which may improve this, but for now that's only used for LVM. (Making Proxmox 9 much more viable on a shared iSCSI/FC LUN).
If your requirements are flexible, Proxmox does have one nice alternative: local ZFS + scheduled replication. This feature performs a ZFS snapshot + ZFS send every few minutes, giving you snapshots on your other nodes. These snapshots can be used for manual HA, automatic HA, and even for fast live migration. Not great for databases, but a decent alternative for homelabs and small businesses.
^1 https://www.theregister.com/2025/08/04/asia_in_brief/