buckeyeCoder's comments

I actually felt the same way about ME 1. I didn't find the game deep or compelling.


REAMDE (http://www.amazon.com/Reamde-Novel-Neal-Stephenson/dp/006197...) by Neal Stephenson is an interesting, fictional take on what might happen if an MMORPG embraced the farmers.

It's also a really quick read for a Stephenson book.


I don't know how this slipped under my radar, thanks for the recommendation!

Play Money is a fantastic account of online gaming economies, if anyone wants a more extensive non-fiction treatment of the topic.


Scalability is nice, but what about redundancy? Not every website can afford to go down for 6 hours while a tech repairs a host in a data center.

There are also challenges with putting all components on a single host. It's simple, but the components are not all going to scale the same way. And depending on how the datastore is partitioned, you'll still make remote calls anyway.


I've seen at least as many outages caused by problems in the additional complexity implemented to avoid having a single point of failure as I've seen outages caused by having one.

Plus, given something like DRBD, having a cold spare that's trivial to spin up isn't that hard (and it has the nice advantage of being relatively agnostic about the underlying data storage technology).
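The promotion itself is only a couple of commands. A minimal Python sketch, assuming a DRBD resource named r0, a mount point of /srv/data, and a hypothetical start-app.sh script:

    import subprocess

    def promote_cold_spare():
        # Make this node the DRBD primary for the resource.
        subprocess.run(["drbdadm", "primary", "r0"], check=True)
        # Mount the replicated block device (drbd0 is the default device name).
        subprocess.run(["mount", "/dev/drbd0", "/srv/data"], check=True)
        # Start the application on the now-writable volume
        # (start-app.sh is a hypothetical placeholder).
        subprocess.run(["/usr/local/bin/start-app.sh"], check=True)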


The not-so-nice disadvantage is that your cold spare can't actually do any work (like serving read traffic), and that if your application itself corrupts data, DRBD will dutifully mirror that corruption. Hopefully the spare can still perform well with cold caches, but I guess a slow site is considerably better than a dead one.


If your secondary is doing work, then you'll get a performance degradation from losing the primary anyway.

The difference here is that once the slave's warmed up you're back to full speed, whereas with a hot spare that serves reads, the performance degradation lasts until you bring the other box back.

Any such corruption is effectively a buggy update: normal replication will propagate a buggy write just as happily, and even if the write crashed the node entirely, there's a good chance your application's retry logic will re-run it against the slave in a moment.
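To make the retry point concrete, here's a rough Python sketch; conn_pool and its leader() method are hypothetical stand-ins for whatever client library you use:

    import time

    WRITE_RETRIES = 3

    def write_with_retry(conn_pool, statement):
        # leader() returns whichever node is currently primary, so
        # after a failover that's the freshly promoted slave.
        for attempt in range(WRITE_RETRIES):
            try:
                return conn_pool.leader().execute(statement)
            except ConnectionError:
                time.sleep(2 ** attempt)  # back off, then try again
        raise RuntimeError("write failed on every attempt")

If statement is itself the buggy write, this loop happily re-runs it against the promoted slave too.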


First you fail over to another server, then you repair the original failed one; this takes repair out of the critical path. It generally requires no changes to the app as long as it's crash-safe. There's plenty of software that does this (e.g. Heartbeat, Red Hat Cluster), but because it doesn't work in the cloud, people have forgotten about it.
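The core of what those tools do fits in a few lines. A simplified Python sketch with made-up addresses and interface name (a real setup also needs things like gratuitous ARP and fencing):

    import socket
    import subprocess
    import time

    PRIMARY = ("10.0.0.10", 5432)  # hypothetical primary address
    FLOATING_IP = "10.0.0.100/24"  # virtual IP that clients connect to
    IFACE = "eth0"

    def primary_alive(timeout=2):
        try:
            with socket.create_connection(PRIMARY, timeout=timeout):
                return True
        except OSError:
            return False

    def take_over():
        # Claim the floating IP on this standby node; repairing the
        # dead primary now happens off the critical path.
        subprocess.run(["ip", "addr", "add", FLOATING_IP, "dev", IFACE],
                       check=True)

    while True:
        if not primary_alive():
            take_over()
            break
        time.sleep(5)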


If you have backups and repeatable deployments, spinning up another host is no harder than replacing one of the many points of failure in a decentralized architecture.
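As a rough Python sketch (the backup path and deploy script are hypothetical placeholders, with pg_restore shown as one concrete possibility):

    import subprocess

    BACKUP = "/backups/db-latest.dump"  # hypothetical path

    def rebuild_host():
        # Restore the datastore from the most recent backup.
        subprocess.run(["pg_restore", "--dbname=app", BACKUP], check=True)
        # Re-run the same repeatable deployment you always use
        # (deploy.sh is a hypothetical placeholder).
        subprocess.run(["/usr/local/bin/deploy.sh"], check=True)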

