Hacker News

I still don't understand the difference between serverless and just running a Docker container. You still have to define your dependencies and entry points. Most serverless offerings run in some kind of abstraction behind the scenes anyway (a container, or some other chroot/namespace setup).

It just feels like silly buzzword garbage that locks you in further to AWS.



We used serverless/Lambda to run our Flask app for flaptastic.com. Since the site doesn't see a lot of web traffic, our web costs are roughly $0 per month, whereas running any kind of long-running web server platform (Docker or otherwise) has fixed costs.
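The "$0 for low traffic" claim is easy to sanity-check with back-of-envelope arithmetic. All numbers below are illustrative assumptions, not current AWS pricing:

```python
# Rough sketch of FaaS vs. always-on cost for a low-traffic site.
# Prices and workload are assumed for illustration, not real AWS rates.
requests_per_month = 50_000
price_per_request = 0.20 / 1_000_000       # assumed: $0.20 per million requests
memory_gb = 0.128                          # assumed: 128 MB function
avg_duration_s = 0.2                       # assumed: 200 ms per request

gb_seconds = requests_per_month * memory_gb * avg_duration_s
price_per_gb_second = 0.0000166667         # assumed compute rate

faas_cost = (requests_per_month * price_per_request
             + gb_seconds * price_per_gb_second)
always_on_cost = 15.0                      # assumed: small always-on VM, $/month

print(f"FaaS: ${faas_cost:.2f}/mo vs always-on: ${always_on_cost:.2f}/mo")
```

At this traffic level the per-invocation charges round to pennies, while the always-on server costs the same whether or not anyone visits (and real providers also have free tiers that can absorb small workloads entirely).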


Lambda is a FaaS serverless platform. Think of the progression as:

- virtual machines abstracted away physical hardware administration

- containers abstracted away the operating system administration

- FaaS/serverless abstracts away the container and/or container orchestration
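The last rung of that ladder can be sketched concretely. In a FaaS model the unit you deploy is a function following the platform's calling convention, not a long-running process. A minimal sketch, loosely following Lambda's event/context shape (the event fields here are assumed for illustration):

```python
import json

def handler(event, context=None):
    # The platform invokes this function per request and owns the
    # process lifecycle, scaling, and routing; no server loop here.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything below this function (container, OS, hardware) is the provider's problem, which is exactly what each step of the abstraction ladder above trades flexibility for.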


I mostly think of serverless as abstracting away how things actually run, with a more specific contract than "exposes a port". That contract allows for deeper integrations and features like mitigating cold starts. Many of these problems are hard to address without a good contract/interface, and an arbitrary container doesn't really give you one.
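The two contracts can be put side by side. This is a sketch of the shapes involved, not any provider's real API: a container promises "a process listening on a port", while FaaS promises "a function the platform calls":

```python
import http.server

# Container-style contract: you run the server and expose a port;
# the platform only knows how to route traffic to that port.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
# http.server.HTTPServer(("", 8080), Hello).serve_forever()

# FaaS-style contract: you export a handler; the platform owns the
# process, so it can freeze it, scale it to zero, or pre-warm it.
def handler(event, context=None):
    return {"statusCode": 200, "body": "ok"}
```

Because the FaaS platform invokes the function itself, it can observe and manage each request, which is what makes optimizations like cold-start mitigation feasible.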

It's similar to buildpacks on Heroku. How are those all that different from containers? Mechanically, not much: Heroku runs buildpacks in its own container system. But by not letting you run arbitrary container images, Heroku can more easily control updates to packages and the base OS of its containers.

In the end, I think of these as higher level abstractions, often building upon containers, but not allowing as much flexibility so the "serverless runtime" can optimize more things.


Serverless and Docker containers are orthogonal. On some clouds, like IBM and I think Azure, you can even run Docker containers on the serverless infra.

Serverless means you don't need to think about scaling or managing a cluster.


Pretty much my evaluation as well. Maybe it could be useful for arbitrary compute jobs whose runtime you can't predict? Like computing an implied-vol surface "all at once" when you don't own your own data center and aren't doing it all the time.

Another point of contention, in my view, is that either they'll lock it down like those online "computing environments" where you can't do any low-level stuff or networking (like IDEone, etc.), or it'll be too open and can be used to spread malware or run DDoSes.

I guess it makes sense that such an offering exists in the modern ecosystem among many others, but beyond completeness I don't really see the value-add. Then again, I'm just a random engineer on the internet; I'm probably not representative of all potential use cases.




