Hacker News

Many reasons prevent this from being practical for any serious purpose.

1. It depends on what part of the world you are in, but many homes have cooling needs for at least part of the year. The need to remove excess heat goes up if you add more heat -- and it is less efficient to do this at the scale of an individual home than at DC scale.

2. Power requirements: While many homelabs have UPS systems, they often lack backup generators, redundant A+B power feeds, and the power density required for higher-powered servers.

3. Connectivity requirements: most homes don't have access to the connectivity that data centers do.

4. Security requirements: homes simply can't meet the security requirements of most data protection regulations. Things like barriers, access control systems, surveillance, and fire protection are anywhere from intrusive to completely impossible in a home.

5. Access requirements: homes aren't conducive to a technician responding to an outage at 3am.

And those are just the big ones.




1. Many places in the world don't ever need cooling

2. If servers are distributed, then downtime is distributed. You can virtually guarantee that some servers around the world will be online, so you can get effectively 100% uptime -- something that is not possible in a data center

3. To serve tokens you need very little bandwidth, it's just text in and out

4. All of this is down to the HW and the SW itself, not the building. That is, the box that's being deployed.

5. Just switch to a different server until the problem is resolved; in this model there is no urgency. You just need redundancy, which you can afford given how much cheaper this would be.


> 1. Many places in the world don't ever need cooling

And data centers also exist in cold places. But if you put 8kw of extra heat in someone's home that previously didn't need cooling, it might need it now.

> 2. If servers are distributed then downtime is distributed, you can virtually guarantee that some of the servers over the world will be online so you can get effectively 100% uptime,

You can! But running more servers with worse uptime is less efficient and requires more capital expense than running fewer servers with better uptime.
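The redundancy tradeoff is easy to put rough numbers on. A minimal sketch (the uptime figures below are illustrative assumptions, not measurements) showing that a handful of reliable data-center nodes can match or beat a larger fleet of flaky home nodes:

```python
# Combined availability of N independent replicas:
# P(at least one up) = 1 - P(all down). Independence is itself an
# assumption -- correlated failures (ISP outages, storms) make it worse.

def combined_availability(per_node_uptime: float, n: int) -> float:
    """Probability that at least one of n independent replicas is up."""
    return 1 - (1 - per_node_uptime) ** n

# Illustrative: many flaky home servers vs. a couple of DC servers.
home = combined_availability(0.95, 5)    # five home nodes at 95% each
dc = combined_availability(0.9999, 2)    # two DC nodes at 99.99% each

print(f"5 home nodes @ 95%:   {home:.7f}")   # ~0.9999997
print(f"2 DC nodes  @ 99.99%: {dc:.8f}")     # ~0.99999999
```

Both approaches can reach "effectively 100%" on paper; the difference is that one needs five boxes (capex, power, maintenance visits) and the other needs two.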

> something that is not possible in a data center

This is not only possible, this is how the large clouds are architected. This is what availability zones are for.

> 3. To serve tokens you need very little bandwidth, it's just text in and out

Bandwidth is only one of many connectivity advantages that data centers provide... and LLMs are a bad choice to run residentially for other reasons, particularly power density.

> 4. All of this is down to the HW and the SW itself, not the building.

Absolutely not -- basically all industry data protection standards have physical security standards. At least, any of the ones that matter.

> 5. Just switch to a different server until the problem is resolved, in this model there is no urgency.

That is true, there are data centers without 24/7 access. They tend to struggle to compete, though.

> You just need redundancy which you can afford with how much cheaper this would be.

Is it? Residential power and cooling cost more -- and that's the majority of the cost of colocating servers.


> 1. And data centers also exist in cold places. But if you put 8kw of extra heat in someone's home that previously didn't need cooling, it might need it now.

That's the entire point of being in a cold place: you don't need active cooling. Just open the window.

> 2. But running more servers with worse uptime is less efficient and requires more capital expense than running fewer servers with better uptime.

Even if the cooling is free? Not even free, the cost is negative since it saves heating cost.

> 3. and LLMs are a bad choice to run residentially for other reasons, particularly power density

Can you explain the connection of LLMs to power density? This point makes no sense.

> 4. Absolutely not -- basically all industry data protection standards have physical security standards. At least, any of the ones that matter.

You can lock a box physically

> 5. That is true, there are data centers without 24/7 access. They tend to struggle to compete, though.

Why though if redundancy exists, like you said? Would they still struggle to compete if the cooling cost was effectively negative?

> 6. Is it? Residential power and cooling costs more -- and that's the majority of the cost to colocate servers

You can make cooling cost negative, if that's the majority of the cost, then that's great! And you can also place your servers in residential areas with the cheapest power.


> That's the entire point of being in a cold place that you don't need active cooling. Just open the window.

> Even if the cooling is free? Not even free, the cost is negative since it saves heating cost.

Again, having cold air outside is not unique to residential homes; locating somewhere cold is a strategy for cooling data centers as well. But it doesn't make environmental management free: you still need to control humidity and move heat, and you can't just run a server outside. And temperature isn't the only concern for hosting a compute workload.

> Can you explain the connection of LLMs to power density? This point makes no sense.

A single server capable of running a frontier LLM workload would overwhelm the electrical capacity of practically any residential system.
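Back-of-the-envelope arithmetic makes the point, using the 8 kW figure from earlier in the thread (the voltage and code-margin numbers are assumptions for a typical North American home):

```python
# Illustrative: what an 8 kW inference box means for a home panel.
power_w = 8000    # assumed server draw, per the 8 kW figure above
voltage = 240     # typical North American split-phase circuit

amps = power_w / voltage
print(f"{amps:.1f} A continuous draw")   # ~33.3 A

# NEC-style practice sizes continuous loads at 80% of breaker rating,
# so this one box wants roughly a dedicated 40-50 A circuit --
# on a panel that is often only 100-200 A for the entire house.
breaker = amps / 0.8
print(f"~{breaker:.0f} A breaker needed")  # ~42 A
```

That is in the same range as an electric range or an EV charger, continuously, for one server.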

> You can lock a box physically

If only ISO 27000/SOC/NIST SP 1800/PCI DSS/ etc were all that easy lol

> Why though if redundancy exists, like you said? Would they still struggle to compete if the cooling cost was effectively negative?

Because of the additional capital costs associated with buying more servers, the additional operational costs of inconveniencing your employees, and the additional operational costs of powering/housing servers that are down.

> You can make cooling cost negative, if that's the majority of the cost, then that's great!

It isn't -- most power in a data center is spent on compute. Even if you do harness the waste heat (which some data centers do), it is at best 100% efficient as a heater. Residential heat pumps already have effective efficiencies better than this.

> And you can also place your servers in residential areas with the cheapest power.

And in those places, industrial rates are typically even lower.

Scale is always cheaper.


If you don't need to manage cooling?

Then you likely need to manage humidity.



