
Why is everyone so afraid to get a $5/mo Ubuntu/Debian VPS, install nginx and call it a day?

Then you can even run multiple projects off the same server.




It means you take responsibility for maintaining the server forever: dealing with TLS certificates, SSH keys, security updates, OS/package updates, monitoring, reboots when it gets stuck, redeploying when the VPS is retired, etc. Usually things work fine for a year or two, and then stuff starts to get old, needs attention, and eats your time.

As someone who runs such a VPS, this is all a non-issue. Running an HTTP service is so trivial that once I set it up I don’t even spend an hour a year maintaining it. Especially with Caddy, which takes care of all the certs for you.

And this is also bearing in mind that I complicate my setup a bit by running the different sites in Docker containers with Caddy acting as a proxy.

With storage volumes for data and a few Bash scripts, the whole server becomes throw-away and can be rebuilt in minutes if I really need to go there.
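A minimal sketch of that kind of throw-away setup, assuming Docker Compose is installed and persistent data sits on an attached volume at /mnt/data (domains, image names, and paths here are illustrative, not from the comment above):

```shell
#!/usr/bin/env bash
# Illustrative rebuild script: Caddy reverse-proxies to app containers,
# and Caddy obtains/renews TLS certificates automatically.
set -euo pipefail

mkdir -p /mnt/data/caddy
cat > /mnt/data/caddy/Caddyfile <<'EOF'
blog.example.com {
    reverse_proxy blog:8080
}
app.example.com {
    reverse_proxy app:3000
}
EOF

cat > docker-compose.yml <<'EOF'
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - /mnt/data/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /mnt/data/caddy/storage:/data
  blog:
    image: my-blog:latest   # hypothetical image
  app:
    image: my-app:latest    # hypothetical image
EOF

docker compose up -d
```

Because the cert storage and site data live on the mounted volume, re-running this on a fresh VPS restores everything.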

And for sure any difficulty and ops overhead pales in comparison to having to manage tooling and dependencies for a typical simple JS web-app. :)


Oh no! Issuing SSL certificates! The horror!

I really doubt that people who can’t install an ssh key should be able to practice software engineering. Sometimes, I think that software engineering should be a protected profession like other types of engineering. At least it will filter out the people who can’t keep their OS up to date.


This is not about how easy or difficult it is to issue TLS certificates, to configure SSH keys or to update the OS. It's about having to actively maintain them yourself in every possible situation until eternity, like when TLS versions are deprecated, SSH key algorithms are quantum-hacked, backward-incompatible new OS LTS versions are released, and so on. You will always have new stuff come up that you need to take care of.

This is all trivial, and can and should be automated. Furthermore, all of your arguments can easily be applied to NodeJS version deprecations, React realizing they shipped a massive CVE, etc.
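For the OS-update part specifically, on Debian/Ubuntu the automation really is a one-time setup; a sketch using the stock packages:

```shell
# One-time setup on Debian/Ubuntu: install security updates automatically.
sudo apt-get update
sudo apt-get install -y unattended-upgrades

# Enable the periodic update/upgrade jobs non-interactively.
sudo dpkg-reconfigure -f noninteractive unattended-upgrades

# Optional: confirm the systemd timers driving it are active.
systemctl list-timers apt-daily-upgrade.timer
```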

I will die on this hill: parent is correct - the ability to manage a Linux server should be a requirement to work in the industry, even if it has fuck-all to do with your job. It proves some basic level of competence and knowledge about the thing that is running your code.


I'm curious about this trivial automation. Let's say the new OS LTS version no longer includes nginx, because it was replaced by a new product with a different config format. How does the automation figure out what the new server package is and migrate your old nginx config to the new format?

I agree with Node.js version deprecations being a huge problem and personally advocate for an evergreen WebAssembly platform for running apps. Apps should run forever even if the underlying platform completely changes, and only require updating if the app itself contains something that needs updating.


The answer is to write your server in portable C++, and just rebuild it for whatever new OS you're dealing with.

The speed. Imagine the performance. There are plenty of mature C++ web server frameworks, it's really not difficult. If you're afraid of C++, you could choose something else. Rust if you're insane, or golang if you're insane but in a different way.

Anyway. Nginx is not going away, so the argument is a bit silly. "What if js went away". Same thing.


If an LTS of an OS replaced nginx with something else: (a) it would be announced with great fanfare months in advance; (b) if you don’t want to switch, add an apt / yum / zypper "install nginx" step to your Ansible task, or whatever you’re using.
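If you don't use Ansible, the shell equivalent of that "make sure nginx is installed" step is a few idempotent lines you can keep in whatever provisioning script you already have:

```shell
# Idempotent: apt-get install is a no-op if nginx is already current.
sudo apt-get update
sudo apt-get install -y nginx

# Start nginx now and on every boot.
sudo systemctl enable --now nginx
```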

The things that you just described are not automation, but human activities needed to tackle the new situation by following news and creating new automation. Which kind of proves my point that you cannot prepare for every unexpected situation before it actually happens. Except maybe with AI in the future.

When AWS announces that they’re EOL’ing the Python or NodeJS version in your Lambda, or the version of your RDS cluster, etc. you also are required to take human action. And in fact, at any appreciable scale, you likely want that behavior, so you can control the date and time of the switch, because “zero downtime” is rarely zero downtime.

Yes, and like I mentioned in another comment, I consider this a major painpoint and problem with Node.js based applications. I have high hopes that eventually there will be an "evergreen" WebAssembly based Lambda function runtime.

I keep reading posts like this, but the people who say this never actually seem to enlighten the rest of us troglodytes by, say, writing a comprehensive, all inclusive, guide to doing this.

If it's so easy, surely it's no big undertaking to explain how one self hosts a fully secured server. No shortcuts, no "just use the usual setup" (we don't know what it is!), no skipped or missed bits. Debian to Caddy to Postgres, performant and fully secure, self upgrading and automated, from zero to hero, documenting every command used and the rationale for it (so that we may learn).

Or is it perhaps not as simple as you say?


The parent I responded to was discussing issuing certs, configuring SSH keys, and updating an OS. Those are all in fact trivial and easily automated.

What you have stated requires more knowledge (especially Postgres). You’re not going to get it from a blog post, and will need to read actual source docs and man pages.


The original claim was "People shouldn't even be in the industry unless they can administer a Linux server, even if that has nothing to do with their role." It is a very significant moving of the goalposts to now suggest this is all about "updating an OS". That's not a good faith claim.

This whole thing is merely cheap online snark masquerading as wisdom. No, not all SWEs know how to maintain Linux servers, and many (most?) SWE roles have all of zero overlap with that kind of work. If businesses could fire all their expensive server admins and replace them with some college kid and a $5 VPS, they would long since have done so.

If this is anything more than poseur snark, put your money where your mouth is and either write a comprehensive resource yourself, or at least compile a list of resources that would suffice for someone to be able to securely run and maintain a live server in production. No, not Hello Worlds, actual prod. Then, when next this comes up, link us to your guide rather than just spraying spittle on the plebs who lack your expertise.

Do something more constructive than low effort snark.


I intermingled the two claims, you’re correct, and was not intending to move the goalpost. I apologize.

Claim one: setting up unattended-upgrades, SSH keygen, and automating cert rotation is trivial and easily automated.

Claim two: you should know how to manage a Linux server. Here are docs.

https://tldp.org/

https://www.man7.org/linux/man-pages/dir_all_by_section.html

https://nginx.org/en/docs/

https://www.postgresql.org/docs/current/index.html


They don't write the guide because by the time they've written the guide to an appropriate level of specification, the result they've produced is an off-the-shelf service provider not unlike the ones they're railing against.

I self host my own server and this isn't something that takes much time per year. You're making it sound like a day job. It's not really. As long as you have a solid initial config you shouldn't have to worry.

Exactly. Also, being that my specialty is writing software and not server maintenance, no matter how much of an effort I put forth there's substantial risk of blind spots where holes can lurk.

I felt more comfortable maintaining a VPS back between 2005 and 2015, but at that point attackers were dramatically less sophisticated and numerous and I was a lot more overconfident/naive. At least for solo operations I'm now inclined to use a PaaS… the exception to that is if said operation is my full time job (giving me ample time to make sure all bases are covered for keeping the VPS secure) or it's grown enough that I can justify hiring somebody to tend to it.


Caddy server even does ssl for you automatically.

Caddy runs on top of Go's excellent acme library that handles all of the cert acquisition and renewal process automatically.

I get that if you get a problem then it'll take a bit of work to fix, but all of this seems like a lot less work than dealing with support for a platform you don't control.


Time is a precious (and really expensive for SWEs) resource, why should one spend it on updating certs and instances?

They shouldn't; that's why self-hosted PaaS tools already do it for you. The fact that cloud services also do it for you isn't a differentiating reason to use them.

You don’t, you automate it. This has been a solved problem for literally years.

Now you have to maintain the automation. There is nothing wrong with that. There is nothing wrong with building your own server. There is nothing wrong with colocation. There is nothing wrong with driving to the colo to investigate an outage. There is nothing wrong with licensing arm and having TSMC fab your chip. There is nothing wrong with choosing which level of abstraction you prefer!

certbot and ssh keys are things you set up once
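Both really are one-time steps; a sketch, with hostnames and addresses as placeholders (certbot's Debian package installs its own renewal timer, so renewal needs no further action):

```shell
# On your workstation: generate a key and copy it to the server.
ssh-keygen -t ed25519 -f ~/.ssh/vps -C "vps key"
ssh-copy-id -i ~/.ssh/vps.pub user@vps.example.com

# On the server: disable password logins once the key works.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh   # service is "sshd" on some distros

# Certbot: obtains the certificate and configures nginx for it.
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d example.com --email you@example.com --agree-tos -n
```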

I haven't rebooted my DO droplets in something like 5 years. I don't monitor anything. None of them have been "retired".


This is the kind of stuff a software developer should have absolutely no problem managing. It's crazy to me that so many software developers hate the idea of maintaining a computer.

just ask claude to do all that :), he is excellent at installing & managing new servers and making sure all security patches are applied. Just be careful if it's a high-risk project.

You clearly haven't tried doing that in quite a long while.

Using SSH keys + fail2ban means that for a simple static site, it will be sufficient for a decade at least.
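A sketch of the fail2ban side, using the stock Debian package and its built-in sshd jail (ban thresholds here are illustrative):

```shell
sudo apt-get install -y fail2ban

# Enable the sshd jail: repeated failed logins get the source IP banned.
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled = true
maxretry = 5
bantime = 1h
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # check the jail is live
```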

TLS certificates get auto-renewed with letsencrypt every 3 months via certbot.

Installing security updates depends heavily on what your threat model is; if you're just displaying some static content you fully own, you'll usually be fine.

Literally never seen a VPS being "retired", if it happened to you, change provider.

I've got a bunch of VPS running for 10+ years, I never need to touch them anymore.

My homelab has been going strong for the past 8 years. I did have to do some upgrade/maintenance work to go from being an old laptop without screen to a minitower low power machine, and when I added 30TB of storage. Other than that, it's running smoothly, it also uses TLS and all the rest.


vs. trusting someone else to do all that for you, and do you then verify that it gets done properly?

When buying the infrastructure as a managed cloud service, yes, I trust that they've got people handling it better than I could myself. The value proposition is that I don't even see the underlying infrastructure below a certain level, and they take care of it.

This is extremely easy with tools like Dokploy though... I use Dokploy locally to manage all my VPSs + home server. Truly good stuff, and I don't believe your quip at the end; it feels like poisoning the open source waters in favor of consolidated, anti-democratic cloud platforms.

It's way way way way easier managing a basic VPS that can be highly performant for your needs. If this was 2010, I'd agree with you but tooling and practices have gotten so much better over the last decade (especially the last 5 years).


Maybe you're right - I've never tried dokploy, but from documentation it sounds like mostly a deployment, monitoring and alerting tool. For me the problem has always been that once you get the alert (or something just stops working), a human needs to react to it and make things work again. In cloud services you mostly pay for them providing the human, and in self-hosting you're the human.

I can see though that today's AI models could eventually replace the human in the loop and truly automatically fix every possible situation.


I must be using the wrong cloud services. Whenever a part of our app goes down someone on the team still needs to respond to it.

You might be right. I've been mostly using serverless / managed cloud services such as AWS Lambda, API Gateway, S3, DynamoDB for the past 10+ years. When I've needed to respond, it's been because I myself deployed a bad update and needed to roll it back, or a third party integration broke. The cloud platform itself has been very stable, and during the couple of bigger incidents that have happened, I've just waited for AWS to fix it and for things to start working again.

you actually need new ops teammates, not new cloud services :)

yeah, I've had more downtime on managed DBs & cloud servers than on my own managed VPS. And if it happens, with a VPS I can normally fix it instantly, compared to waiting 20-60 min for a response just to be told they've started fixing it. And when they fix it, it doesn't always mean your instance automatically works.

Agreed, Dokploy is great, not sure why you got downvoted for the suggestion.

IDK, I only found out about Dokploy six months ago. The tools nowadays for managing small hosted solutions are absolutely amazing. You can do a lot with a single VPS if you avoid bloated software choices.

People often forget there is a massive economy out there for niche solutions and if you're a small team you don't exactly need a large slice to make a nice life for yourself.


I don't even bother setting up VPS instances by hand. If you have Gmail then you have access to Google Cloud, and they offer a free tier of Cloud Run that comfortably covers anything you might do on a personal project.

You basically create a GitHub repo, put a Dockerfile inside it with your nginx config, frontend files, backend, etc., then push, and the Cloud Run instance is built for you and deployed into production. By default you pay only for active requests: when an HTTP request hits your box, GCP wakes it up, charges for the CPU time used to serve it, then leaves it idle for free for about 15 minutes. If another request comes in that interval, you get an instantaneous response because the instance is warm; otherwise it wakes up again and sees a few seconds of latency (e.g. during the night, when you have few visitors).

It also scales up automatically if you have substantial traffic, you don't have to do anything other than design your application so that multiple instances hitting the same data storage (ex. Firestore) will play nice. It of course handles all security, versioning, HTTPS certs etc. for you, you are simply serving plain HTTP traffic within the GCP internal network and just make sure your own application (what you push to git) is secure.

The things you pay for are outbound traffic (for obvious reasons, like warez) and storage of Docker images (Artifact Registry; I think you only get 0.5 GB free, about 3 Alpine images), but you can easily set up a rule to auto-delete old images.

Overall, you can run a small business with daily/weekly updates for less than a dollar a month and hit five nines of availability, which you will never achieve with a self-administered VPS. Sorry if it sounds like an advertisement, but it's just enormous value for a small builder.
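The flow above can be sketched in two gcloud commands; service, repository, and region names are placeholders, and the cleanup-policy file format is as described in the GCP docs:

```shell
# Build from the Dockerfile in the current repo and deploy to Cloud Run.
gcloud run deploy my-service \
  --source . \
  --region europe-north1 \
  --allow-unauthenticated

# Keep Artifact Registry under the free quota by auto-deleting old images
# via a cleanup policy (cleanup-policy.json is a hypothetical policy file).
gcloud artifacts repositories set-cleanup-policies my-repo \
  --location=europe-north1 \
  --policy=cleanup-policy.json
```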


I still think you described using a VPS but with a ton of extra steps and expenses, while being tied to an evil corporation people are trying to move past.

You get a generic VPS and you can do whatever the hell you like, not paying bigG for some "obvious reasons" like outbound traffic.

And a small business will never need five nines of availability; that's just propaganda from big tech to make you over-engineer and pay them for it. You can run a small/medium business, be offline for 1 hour every day (that's 95.8%), and still be fine. It's when you're worldwide and not that small that you want better availability.

Also, you know all those AWS outages? My VPSs were never impacted in the slightest!


A Docker image host is NOT a VPS with extra steps, because a VPS is a server and needs to be administered professionally, as a server, by someone competent for that job; that excludes the 90% of developers who are willing to spend only one hour per year on the task. Think about running mail servers: you can do it manually, but to do a good job you need to invest so much time and effort that almost everyone doing it eventually throws in the towel.

And while I agree with the sentiment of resisting encloudification, you can take your Docker image to any other host if you want; it's a generic service. In a pinch, you can build your own and have 100% control, just like the VPS case.

The point is that you don't have to: you just git push into production and forget about it. That's a good few dozen fewer "extra steps" than the VPS route.


I just did this over at Hetzner, and Claude admins it for me so I don't need to learn the CLI or anything; I describe the proxying I want, and it sets up a bunch of small side-project pages for me.

How do you use Claude to admin it? Does Claude SSH into the server and do everything or just write bash scripts?

SSH into the VPS and launch Claude Code directly on the machine, so it has full access.

For me I always default to UpCloud, great team and great services. From Finland!

Or a homelab using Proxmox or Unraid.

No click-ops that way.

To be fair, I never have to click anything either since it's via SSH :D


