I think the "local maximum" we've gotten stuck at for application hosting is having a docker container as the canonical environment/deliverable, and injecting secrets when needed. That makes it easy to run and test locally, but still provides most of the benefits I think (infrastructure-as-code setups, reproducibility, etc). Serverless goes a little too far for most applications (in my opinion), but I have to admit some apps work really well under that model. There's a nearly endless number of simple/trivial utilities which wouldn't really gain anything from having their own infrastructure and would work just fine in a shared or on-demand hosting environment, and a massively scaled stateless service would thrive under a serverless environment much more than it would on a traditional server.
That's not to say that I think serverless is somehow only for simple or trivial use cases though, only that there's an impedance mismatch between the "classic web app" model, and what these platforms provide.
You are ready for misterio: https://github.com/daitangio/misterio
A tiny layer around a stateless Docker cluster.
I created it for my homelab and it went wild.
Docker is much like microservices: appropriate for a subset of apps, yet touted as 'the norm' when it shouldn't be.
There are drawbacks to using docker, such as security patching and operational overhead. And if you're blindly putting it into every project, how are you mitigating the risks it introduces?
Worse, the big reason it was useful, managing dependency hell, has largely been solved by making developers default to not installing dependencies globally.
We don't really need Docker anywhere near as much as we used to, and yet it persists as the default, unassailable.
Of course hosting companies must LOVE it, docker containers must increase their margins by 10% at least!
Someone else down thread has mentioned a tooling fetish, I feel Docker is part of that fetish.
It has downsides and risks involved, for sure. I think the security part is perhaps a bit overblown, though. In any environment, the developers either care about staying on top of security or they don't. In my experience, a dev team that skips proper security diligence when using Docker likely wouldn't handle it well outside of Docker either. The number of boxes out there running some old version of Debian that hasn't been patched in the last decade is probably higher than any of us would like.
Although I'm sure many people just do it because they believe (falsely) that it's a silver bullet, I definitely wouldn't call it part of a "tooling fetish". I think it's a reasonable choice much more often than the microservice architecture is.
Hard disagree. I've used Docker predominantly in monoliths, and it has served me well. Before that I used VMs (via Vagrant). Docker certainly makes microservices more tenable because of the lower overhead, but the core tenets of reproducibility and isolation are useful regardless of architecture.
There's some truth to this too, honestly. At $JOB we prototyped one of our projects in Rust to evaluate the language for use, and only started using Docker once we chose to move to .NET, since the Rust deployment story was so seamless.
Haven't deployed production Java in years, so I won't speak to it. However, even with Go's static binaries, I'd like to leverage the same build and deploy process as other stacks. With Docker a Go service is no different than a Python service. With Docker, I use the same build tool, instrument health checks similarly, etc.
Standardization is a major benefit. Every major cloud has one (and often several) container orchestration services, so standardization naturally leads to portability. No lock-in, from my local machine to the cloud.
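To illustrate what I mean by "same build tool, same health checks" (the image names and health endpoint here are made up, but the flags are standard docker ones):

    # Same commands whether the service is Go, Python, or anything else
    docker build -t myorg/users-svc:42 ./users-svc       # Go service
    docker build -t myorg/reports-svc:42 ./reports-svc   # Python service

    # Health checks instrumented the same way for both
    docker run -d --name users-svc \
      --health-cmd "curl -f http://localhost:8080/healthz || exit 1" \
      --health-interval 30s \
      myorg/users-svc:42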
Even when running things in their own box, I likely want to isolate things from one another.
For example, different Python apps using different Python versions. venvs are nice but incomplete; you may end up using libraries with system dependencies.
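As a rough sketch of what that isolation buys (the image tag and the system package are just examples), each app pins its own interpreter and system libs instead of fighting over the host's:

    # Each app brings its own Python, independent of whatever the host has
    FROM python:3.11-slim
    # System-level dependency that a venv alone can't capture (illustrative)
    RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
        && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]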
I deeply disagree. Docker’s key innovation is not its isolation; it’s the packaging. There is no other language-agnostic way to say “here’s code, run it on the internet”. Solutions prior to Docker (e.g. buildpacks) were not so much language-agnostic as they were language-aware.
Even if you allow yourself the disadvantage that any non-Docker solution won’t be language-agnostic: how do you get the code bundle to your server? Zip & SFTP? How do you start it? ./start.sh? How do you restart under failure? Systemd? Congrats, you reinvented docker but worse. Want to upgrade a dependency due to a security vulnerability? Do you want to SSH into N replicated VMs and run your Linux distribution specific package update command, or press the little refresh icon in your CI to rebuild a new image then be done?
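To put it concretely, the "little refresh icon" path is roughly this, done once and rolled out everywhere (the registry and tags are placeholders):

    # Rebuild with the patched dependency and publish the new image
    docker build -t registry.example.com/myapp:1.2.4 .
    docker push registry.example.com/myapp:1.2.4

    # Each host (or your orchestrator) just swaps to the new image
    docker pull registry.example.com/myapp:1.2.4
    docker stop myapp && docker rm myapp
    docker run -d --name myapp --restart unless-stopped registry.example.com/myapp:1.2.4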
Docker is the one good thing the ops industry has invented in the last 15 years.
This is a really nice insight. I think years of Linux have kind of numbed me to this. I've spent so much time on systems which use systemd that going back to an Alpine Linux box always takes me a second to adjust, even though I know more or less how to do everything on there. I think docker's done a lot to help with that, though, since the interface is the same everywhere. A typical setup for me now is to have the web server running on the host and everything else behind docker, since that gives me the benefit of using the OS's configuration and security updates for everything exposed to the outside world (firewalls, etc.).
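Concretely (names made up), the containers only listen on loopback, so the distro-managed web server and firewall stay in front of everything that's publicly reachable:

    # App container only reachable from the host itself; the host's own
    # web server (installed and patched via the distro) proxies 443 to it
    docker run -d --name blog --restart unless-stopped \
      -p 127.0.0.1:8080:8080 myblog:latest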
Another thing about packaging: I've started noticing myself subconsciously adding even a trivial Dockerfile to most of my projects now, just in case I want to run them later without having to hassle with installing anything. That gives me a "known working" copy which I can more or less rely on to run if I need it. It took a while for me to get to that point, though.
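That "trivial Dockerfile" is usually just a few lines, something like this for a small Go project (purely illustrative), enough that future-me can docker build it without remembering what needed installing:

    # Minimal "just in case" Dockerfile; assumes a go.mod at the project root
    FROM golang:1.22
    WORKDIR /src
    COPY . .
    RUN go build -o /usr/local/bin/app .
    CMD ["/usr/local/bin/app"]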
It's all the same stuff. Docker just wraps what you'd do in a VM.
For the slight advantage of deploying every server with a single line, you've still got to write the multi-line build script, just for docker instead. Plus all the downsides of docker.
There's another idea too: that docker is essentially a userspace service manager. It makes things like sandboxing, logging, restarting, etc. the same everywhere, which makes having that multi-line build script more valuable.
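For example (the values below are just placeholders), the restart policy, log rotation, and resource limits are all handled by the runtime with the same flags, whatever is inside the container:

    # Restart-on-failure, log rotation, and resource caps handled uniformly
    docker run -d --name worker \
      --restart on-failure:5 \
      --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 \
      --memory 512m --cpus 1.0 \
      myorg/worker:latest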
In a sense it's just the "worse is better" solution[0], where instead of applying the good practices (sandboxing, isolation, good packaging conventions, etc.) which lead to those benefits, you just wrap everything in a VM/service manager/packaging format which gives them to you anyway. I don't think it's inherently good or bad, although I understand why it leaves a bad taste in people's mouths.
Docker images are self-running. Infrastructure systems do not have to be told how to run a Docker image; they can just run it. Scripts, on the other hand, are not self-running: at the simplest level because you'd have to tell your infrastructure system the name of the script, but more comprehensively and typically because there are often dependencies the run script implies about its environment but does not (and, frankly, cannot) express. Docker solves this.
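In other words, the start command travels with the image instead of living in a runbook. Roughly (image name invented):

    # Inside the image, the Dockerfile already declared how it starts:
    #   ENTRYPOINT ["/usr/local/bin/app", "--config", "/etc/app.yaml"]

    # So any host or orchestrator can run it with zero app-specific knowledge
    docker run -d registry.example.com/myapp:1.2.4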
> Docker just wraps what you'd do in a VM.
Docker is not a VM.
> Plus all the downsides of docker.
Of which you've managed to elucidate zero, so thanks for that.