That sounds to my self-hoster ears like an expensive way to do self-hosting. Isn't at least half the point of AWS to use their SaaS thingies that all integrate with each other? I think people now call it "cloud-native software".
Not that most of the customers whose AWS environments we audit do much of this, at least not beyond some basics: creating a VLAN ("virtual private cloud"), three layers of proxies to "load balance" a traffic volume that my old laptop could handle without pushing the load average above 0.2, and some super-complex authentication/IAM rules for a handful of roles and service accounts, iirc
(The other half of the point is infinite scale, so that you can have infinite customers signing up and using the system at once (who hopefully pay before your infinite AWS bill is due), but you can still do that with VPSes and managed databases/storage.)
The point of moving to AWS is often to benefit from their data centers and reliability promises. So VPC, EC2, IAM, maybe S3 have a clear point.
And one small note: apart from S3, virtually all AWS services are tied to a VPC; any kind of deployment starts with "OK, in which VPC do you want this resource?"
You can get 95% of the reliability for 10% of the price at any dedicated hoster. AWS just figured out the magic word "cloud" means they can charge you 10 times the price.
At Azure or GCP you pay a similar price but you don't even get the reliability, so literally why would you use them? The only reason I see is that "cloud" means you can start instances at any time without a setup fee or contract duration. But given the size of the cost difference, you could keep three times your baseline "cloud" load running all the time at a non-cloud hoster and still save money!
It is an expensive way to do self-hosting, yeah! I guess one reason is, sometimes it’s easier to just use one of the big N clouds – e.g. if you’re in a regulated industry and your auditors will raise eyebrows if they see a random VPS provider instead. (Or maybe not? If you do that kind of audit, I’d love to hear your thoughts!)
> Isn't at least half the point of AWS to use their SaaS thingies
It is. (That’s how they lock you in!) I think it’s okay to use some AWS stuff once in a while, but I’d be wary of building your whole app architecture around AWS.
I’m in the self-hosters camp myself :-) I’m building a Docker dashboard to make it easier to build, ship, and monitor applications on any server: https://lunni.dev/
This is sort of petty, but… your web page does the horrible scroll-and-watch-the-content-fade-in thing. This is annoying and makes me want to stop reading. It also makes me wonder if your product does the same thing, which would be a complete nonstarter. Seriously, if I’m debugging some container issue, I do not want some completely unnecessary animation to prevent me from seeing the dashboard.
Thanks for the feedback! No, the product’s UI is definitely on the pragmatic side: I think the only blocking animation we have right now is dialog windows sliding in (and we try to avoid those altogether!). Both the landing page and the app disable animations when prefers-reduced-motion is set.
I’ll rethink the landing page animations a bit later! (I was thinking about redoing it from scratch again, anyway :^)
> I’d be wary of building your whole app architecture around AWS.
Yep yep, we're on the same page here. It just sounded a bit like you were recommending EC2, managed Postgres, and S3, which seems to defeat the purpose of using a big cloud vendor in the first place, since those products can be had much cheaper elsewhere
Just clicked the link to your website: your hero statement I sorta have on a t-shirt (there is no cloud, just someone else's computer), and the footer statement "To improve your browsing experience we don’t use cookies" I have nearly verbatim on one of my websites. We are definitely on the same page xD.

The website seems to think I'm from Tallinn, btw; maybe a reverse-proxy issue (it looks up the location of the proxy's IP address instead of my German one)? I do second the sibling comment's remark about fading in content, but (maybe you've adjusted it already) it fades fast enough that it doesn't bother me as much as on some other websites. Could still be faster though, imo
I’ve just dropped you a note in the chat. We’re also on Swarm, and dealing with most of the stuff you address, and some more. Would love to contribute to Lunni, if you’re open to that :)
This is GCS implementing the S3 API incorrectly in a way that really ought not to break clients, but it’s still odd because the particular bug on GCS’s end seems like it took active effort to get wrong. But it’s also boto (the main library used in Python to access S3 and compatible services) doing something silly, tripping over GCS’s bug, and failing. And it’s AWS, who owns boto, freely admitting that they don’t intend to fix it, because boto isn’t actually intended to work with services that are merely compatible with S3.
As icing on the cake, to report this bug to Google, I apparently need to pay $29 plus a 3% surcharge on my GCS usage. Thanks.
> As icing on the cake, to report this bug to Google, I apparently need to pay $29 plus a 3% surcharge on my GCS usage. Thanks.
That's the price of a support contract, not a "bug report". And it's not "plus", it's "or": support costs $29/month or 3% of your monthly bill, whichever is greater. It comes with SLAs for fixing or working around your reported problems. Though obviously in this case they'll probably just tell you to use their own Python library and not boto.
Oh, thanks, Google: if my cloud spend is more than $967/mo, I don’t get dinged by the $29 minimum. But it is the price of a bug report, because I can’t file one without paying it.
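A quick sanity check of that crossover figure (assuming the "max of $29 or 3% of the monthly bill" pricing described in the comment above):

```python
# Support fee per the pricing described above:
# $29/month or 3% of the monthly bill, whichever is greater.

def support_fee(monthly_bill: float) -> float:
    """Return the monthly support fee for a given cloud bill."""
    return max(29.0, 0.03 * monthly_bill)

# The $29 minimum stops mattering once 3% of the bill exceeds it:
crossover = 29.0 / 0.03  # ≈ $966.67/month, i.e. roughly the $967 figure above

print(support_fee(500))     # small bill: the flat $29 minimum applies
print(support_fee(2000))    # larger bill: 3% of $2000 = $60
print(round(crossover, 2))
```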
And this situation is bad business. Google advertises that GCS has S3 interoperability support. And they have customers who use it in its interoperable mode. Presumably those customers could use GCS’s biggest competitor, too. Shouldn’t Google try to make the S3 interop work correctly?
GCS and AWS are commercial products for which you pay, not open source projects you can expect to support you for free. I don't know what to say here, if you have something "serious" to do on these platforms then a 3% overhead for support seems like an obvious choice.
Honestly it seems to me like you're excited to have found a bug and want to report it for glory; we've all been there. But No One Cares about that stuff in the world of commercial software. They fix bugs for real customers, not internet rock stars. If you aren't losing even $29 (one mid-tier meal!) from this bug, well… does it even rise to the level of "yell about it on HN"?
S3 isn't a spec. You'll never be 100% compatible with a third party product with ad hoc behavior. Companies do what they can as a migration aid, but at the edges there will always be bugs like this (which I'm gathering is really an issue with boto being inflexible about unexpected details and not so much "incompatible" behavior by GCS).
Again, this isn't an open source product. It's not like there's a universal agreement that coordination and compatibility are ideals to be respected, nor a public place for discussion and development. Google is trying to make GCS as S3 compatible as possible, but obviously they prioritize their paying customers (who seem not to have reported this issue). Amazon does not want GCS to be boto-compatible at all, really.
It is also the reason to use proprietary bullshit services: if there were no utility gap, a reasonable evaluation would conclude that migrating to them isn't worthwhile.
The S3 protocol is proprietary to Amazon, and if you're not using Amazon but still relying on Amazon not to change it, that's on you, because they have no obligation to you.
The concept of object storage is not proprietary. You should be able to change your code to use a different object storage provider.
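In principle that switch can be mostly a config change: with path-style addressing, an S3-style object URL is just an endpoint plus a bucket and key, so changing providers largely means changing the endpoint. A minimal illustrative sketch (the endpoint table here is an assumption for illustration, not a complete or authoritative list):

```python
# Illustrative sketch: with path-style addressing, an S3-style object URL
# is endpoint + bucket + key, so switching object storage providers
# largely comes down to switching the endpoint the client points at.
from urllib.parse import quote

ENDPOINTS = {
    "aws": "https://s3.amazonaws.com",        # S3 itself
    "gcs": "https://storage.googleapis.com",  # GCS's S3-interoperability endpoint
}

def object_url(provider: str, bucket: str, key: str) -> str:
    """Build a path-style object URL for the given provider."""
    return f"{ENDPOINTS[provider]}/{bucket}/{quote(key)}"

print(object_url("aws", "my-bucket", "backups/db.sql"))
# https://s3.amazonaws.com/my-bucket/backups/db.sql
print(object_url("gcs", "my-bucket", "backups/db.sql"))
# https://storage.googleapis.com/my-bucket/backups/db.sql
```

Real client libraries expose the same knob, e.g. boto3's `endpoint_url` parameter when constructing an S3 client, so the rest of the code can stay identical (modulo compatibility bugs like the one discussed in this thread).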
I didn't mean to throw any shade at S3. It certainly isn't Amazon's fault that people use its proprietary protocol with other products.
I was trying to point out that S3's protocol is "proprietary" so if you're using it you're still (somewhat) using "proprietary bullshit" in your stack!
I mean yeah, it isn’t really open – but it’s the informal “standard” that everyone is using nowadays. You can’t really expect every cloud provider and every app to just switch to WebDAV or something (though we can push for a gradual change here somehow!)