Hacker News | keeperofdakeys's comments

Nearly all ISPs these days are deploying IPv6 for their mobile networks and core service networks, especially in less developed markets^1. The reason is simple: there's a cost justification. What doesn't exist is a cost justification for enterprises to deploy IPv6, or for ISPs to deploy IPv6 for residential and corporate Internet.

IMO, with the right market conditions, IPv6 could spread really fast, within 6-24 months. For example, most cloud providers now charge for IPv4 addresses while IPv6 is free. Small changes like that push in the right direction.

^1 https://www.theregister.com/2025/08/04/asia_in_brief/


Hetzner makes you pay 1 € per IPv4 address, while IPv6 is free. I'd gladly get rid of all my IPv4s, given that I have many servers.

But their VLANs still only support IPv4, which makes it hard to route external IPv6 traffic through them. You need tunnels.

I don't even know why clouds offer public IP addresses. In my opinion, all clouds should only have a gateway that routes via host header for millions of customers. A dedicated IPv4 should be a special privilege for special situations, at a higher price. Then these clouds could own maybe 20 IPs total instead of millions.

> In my opinion all clouds should only have a gateway that routes via host header for millions of customers.

This is incompatible with TCP/IP networking. In TCP connections, (sender_address, sender_port, receiver_address, receiver_port) is a unique combination. Those numbers together uniquely identify the sender talking to the receiver. For a public webserver:

* sender_address is the client machine's IP address

* sender_port is a random number from 0..65535 (not quite, but let's pretend)

* receiver_address is the webserver's IP address

* receiver_port is 443

That means it'd be impossible for one client IP to be connected to one server IP more than 65535 times. Sounds like a lot, right?

* sender_address is the outbound NAT at an office with 10,000 employees

Now each user can have at most 6.5 connections on average to the same webserver. That's probably not an issue, as long as the site isn't a major news org and nothing critical is happening. Now given your scheme:

* receiver_address is the gateway shared by 10000 websites

Now each user can have at most 6.5 connections to all of those 10000 websites combined, at once, total, period. Or put another way, 100,000,000 client/website combos would have to fit into the same 65535 possible sender_ports. Hope you don't plan on checking your webmail and buying airline tickets at the same time.
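The arithmetic above can be sketched directly (a toy calculation; the 0..65535 port pool is the same simplification the comment uses, and all counts are the example's made-up numbers):

```python
# Back-of-the-envelope port-exhaustion arithmetic from the comment above.
EPHEMERAL_PORTS = 65_535       # simplification: pretend every port is usable
USERS_BEHIND_NAT = 10_000      # office NAT from the example
SITES_BEHIND_GATEWAY = 10_000  # websites sharing one gateway IP

# One NAT IP talking to one server IP: all users share one pool of source ports.
per_user = EPHEMERAL_PORTS / USERS_BEHIND_NAT
print(f"{per_user:.1f} connections per user")  # ~6.5

# If that server IP also fronts 10,000 sites, every (user, site) pair
# contends for the same 65,535 source ports:
combos = USERS_BEHIND_NAT * SITES_BEHIND_GATEWAY
print(f"{combos:,} user/site combos vs {EPHEMERAL_PORTS:,} ports")
```
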


This is actually a good point. I guess 20 IPs per cloud infra company is probably too few. But maybe these cloud companies can have 20k IPs instead of 2 million?

Multiplied across 20 shared addresses, that would be about 130 connections per user to 200,000 websites.

> host header

Not all workloads are HTTP.

> gateway .. for millions of customers

That's basically what an AWS ALB is. It's not provisioning bespoke infrastructure when you create one; it's just a routing rule in their shared infra.

If Amazon wanted, they could easily have shared IPs, but clearly the cost of an IPv4 address isn't so great that this approach has been warranted yet.


Yeah, I get all that, but the only two connection types that are useful are HTTP(S) and SSH. SSH can have workarounds, like the way Google does it.

Let's let the people that want non http workloads pay more.


Remember that, at one stage, the only two types that were useful were FTP and telnet. HTTP and SSH didn't even exist.

Let's not strangle the next big thing that doesn't exist yet before it can even be born, yeah?


The next big thing can happen on IPv6

But only if you don't hide everybody away behind routers that require HTTP and a host header.

OVH does the same, but only gives you a /128. Which is ridiculously shitty of them.

I find the whole OVH web control panel atrocious. It's so buggy I couldn't even have my account deleted, not even after contacting their customer support (they just told me to fix it myself using their APIs...).

Imagine if one day someone came up with a "better" way to chew food, but you had to learn how to do a super complex jaw movement and it wouldn't work in restaurants. It has no obvious benefit to you. The only motivation is that a small group of obsessively passionate (but not in a good way) people say at some unknown point in the future food won't be edible anymore.

IPv6 just tried to do too much, so it failed at everything. Putting letters in IP addresses made it nearly impossible to remember what your network settings were supposed to be.

It is nothing short of a miracle that devices can even get IPv6 addresses. SLAAC was supposed to replace DHCP, but it couldn't provide DNS server addresses. DHCPv6 was introduced to replace SLAAC, but this time they forgot to add a way to communicate a default route. This led to Cisco, Microsoft, and Google all taking completely different approaches, and the IETF helpfully blocking any efforts at cross-vendor standardization because of v6 zealots.


Meanwhile, everybody else is using a plastic skull that they carry around with them to pre-chew the inedible food, which is the majority of the food you can get these days.

"It's not inedible," they say. "Just let me get my skull out."

> IPv6 just tried to do too much so it failed at everything. Putting letters in IP addresses made it near impossible to remember what your network settings were supposed to be.

People said the same sort of thing about v4: that it was hard to configure because you needed to know four separate addresses (IP, netmask, default route AND the DNS server) and if you mix up any of these it doesn't work.

As it turns out, in both cases it's just a lack of familiarity, not actual difficulty. The super complex jaw movement is just a regular bite, but you puff your cheeks out a bit. Or er, something.

> This lead to Cisco, Microsoft, and Google all taking completely different approaches [...] but this time they forgot to add a way to communicate a default route

"There should only be one way to do things... wait, no, not like that."


> I've been using Duplicati to sync a lot of data to S3's cheapest tape-based long term storage tier.

There are actually a lot of cheaper S3-compatible services out there (like Backblaze B2 or Cloudflare R2). The pricing may work out such that you can just back up to these directly. It certainly gives you far more control than Backblaze Backup.


Reminds me of the story of Polywater. https://en.wikipedia.org/wiki/Polywater


> a hypothesized polymerized form of water

With what chemical structure, even? That should have been the first red flag.


Stripe does this in a cool way. Their REST API is versioned by date, and each time they change it they add a stackable compatibility layer. So your decade-old code will still work.

https://stripe.com/blog/api-versioning
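Concretely, a Stripe client pins a version by sending a date in the `Stripe-Version` request header; the sketch below just builds such a request with Python's stdlib (the version date and API key here are placeholders, not a real account):

```python
import urllib.request

# Hypothetical pinned version date; Stripe versions are calendar dates.
STRIPE_VERSION = "2023-10-16"

req = urllib.request.Request(
    "https://api.stripe.com/v1/charges",
    headers={
        "Stripe-Version": STRIPE_VERSION,  # responses follow this date's schema
        "Authorization": "Bearer sk_test_placeholder",  # fake key for illustration
    },
)
# urllib normalizes header names to Capitalized-lowercase form.
print(req.get_header("Stripe-version"))  # 2023-10-16
```

Pinning per-request like this is what lets Stripe layer compatibility shims per version date instead of breaking old callers.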


Ceph's overheads aren't that large for a small cluster, but they grow as you add more hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times to different machines, which leads to a large overhead compared with local storage.

Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
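To make the write amplification concrete, here's a toy capacity calculation (the node and drive counts are made up; `size=3` is Ceph's default replicated pool size):

```python
# Hypothetical small cluster: 3 nodes x 4 OSDs per node x 8 TB drives.
nodes, osds_per_node, tb_per_osd = 3, 4, 8
raw_tb = nodes * osds_per_node * tb_per_osd

replicas = 3  # Ceph default pool size: each object lands on 3 different hosts
usable_tb = raw_tb / replicas

print(f"{raw_tb} TB raw -> {usable_tb:.0f} TB usable")
```

So a replicated homelab cluster gives you roughly a third of its raw capacity, before accounting for the headroom Ceph wants for rebalancing.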


Arguably, "comma as a separator" is close enough to comma's usage in (many) written languages that it makes it easier for less technical users to interact with CSV.


Easier, as long as they don't try to put any of those written languages in the CSV.

Commas and quotation marks suddenly make it complicated.
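This is exactly what CSV quoting rules exist for; Python's stdlib `csv` module, for instance, applies RFC-4180-style quoting automatically (the sample rows are made up):

```python
import csv
import io

rows = [
    ["quote", "author"],
    ["Hello, world", 'She said "hi", twice'],  # commas and quotes inside fields
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
# Fields containing commas or quotes come out quoted, with inner quotes doubled:
#   "Hello, world","She said ""hi"", twice"

# And it round-trips cleanly:
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
assert parsed == rows
```

The complication is real, though: it only works if every producer and consumer agrees on the quoting rules, which hand-rolled CSV writers often don't.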


A bit out of context, but it reminded me of this funny moment. The only winning move is not to play.

https://www.youtube.com/watch?app=desktop&t=10&v=xOCurBYI_gY

(Background: Someone training an algorithm to win NES games based on memory state)


https://www.openssh.com/legacy.html covers the legacy algorithms in OpenSSH and explains a little about what they do. Then there is also your identity key, which you authenticate yourself with, and which is placed in the server's authorized_keys.
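For example, re-enabling legacy algorithms can be scoped to a single old device in `~/.ssh/config` (a config fragment; the host name, address, and specific algorithms are illustrative, and the legacy page above lists which options your OpenSSH build still supports):

```
# ~/.ssh/config -- re-enable legacy algorithms for this one host only
Host old-router
    HostName 192.0.2.1
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
```

The leading `+` appends to the default list rather than replacing it, so newer algorithms stay preferred for hosts that support them.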


Usually smooth. But if you're running a production workload, definitely do your prep work: working and tested backups, upgrading one node at a time and testing, reading the release notes, waiting a week after major releases, etc. If you don't have a second node, I highly recommend one; Proxmox can do ZFS replication for fast live migrations without shared storage.


Unfortunately, clustered storage is just a hard problem, and there is a lack of good implementations. OCFS2 and GFS2 exist, but IIRC there are challenges in using them for VM storage, especially for snapshots. Proxmox 9 added a new feature to use multiple QCOW2 files as a volume chain, which may improve this, but for now it's only used for LVM (making Proxmox 9 much more viable on a shared iSCSI/FC LUN).

If your requirements are flexible, Proxmox does have one nice alternative: local ZFS plus scheduled replication. This feature performs a ZFS snapshot and ZFS send every few minutes, giving you snapshots on your other nodes. These snapshots can be used for manual HA, automatic HA, and even fast live migration. Not great for databases, but a decent alternative for a homelab or small business.

