I can bring up an app on Linux or Windows from bare metal in minutes by hand. But the way it's supposed to be done now is something like this, right:
1) Use one of Chef/Puppet/Salt/Ansible to orchestrate
3) One of those in item 1 will use Vagrant, which
3) Uses Docker or Kubernetes to
4) Set up a container which will
5) finally run the app
1) Developer pushes code to the repo; it's tested, and if the tests pass:
2) An rkt/Docker image is made by the CI/CD system and pushed to a Docker registry (sketched below)
3) Automatically deploy the new image to staging for tests and to master for production (can be a manual step)
4) Sit back and relax because my time invested up front saves me hassle in the future
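The image step in 2) is usually just a couple of lines in whatever CI tool you use; roughly this, with the registry/org/app names made up and $GIT_SHA standing in for whatever your CI exposes as the commit id:

    # build an image tagged with the commit that triggered the CI run, then publish it
    docker build -t quay.io/myorg/myapp:$GIT_SHA .
    docker push quay.io/myorg/myapp:$GIT_SHA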
I have 30+ nodejs apps that can be updated on a whim in under 5 seconds each.
Setting up bare metal instances, even when using something like Ansible, is slow and cumbersome compared to what I'm now doing with k8s.
Doing an apply/patch to kubernetes nodes takes seconds (assuming services aren't affected).
Edit: Sorry, for the unfamiliar: by services I mean load balancers/ingress (it's a k8s term). It takes 45s to 1 minute to update GLBCs, so I modify them as rarely as possible.
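By apply/patch I mean the usual kubectl calls, along these lines (deployment/container names made up):

    # push the whole manifest...
    kubectl apply -f myapp-deployment.yaml
    # ...or just roll the image forward without touching the rest of the spec
    kubectl patch deployment myapp -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"myorg/myapp:v2"}]}}}}'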
We push our docker images to quay.io but pull+deploy is still manual. How does the target platform detect a new container on the registry in order to kick off a pull+deploy?
Take a look at Drone (http://readme.drone.io, not to be confused with the hosted Drone app). It allows you to execute arbitrary commands after publishing the Docker image. You can tell it to run kubectl to perform a deploy, for example.
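That post-publish deploy step can be a one-liner; something in the spirit of this, with the deployment and registry names made up and $TAG being whatever your build exports:

    # run after the image is pushed; rolls the deployment over to the new tag
    kubectl set image deployment/myapp myapp=quay.io/myorg/myapp:$TAG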
Drone can also run CI tests and build stuff in temporary containers, which avoids the need for compilers etc. taking up space in the final image, and negates the need for squashing (which busts your cache).
Much faster than Quay. Quay starts a new VM in AWS every time it builds, which is super slow. Drone just starts containers.
(You have to self-host Drone, though. Since it requires running Docker in privileged mode it's not a good fit for running under Kubernetes, unfortunately.)
(Kubernetes on GKE always runs in privileged mode, but that doesn't mean it's a good idea!)
Quay.io has been beta testing a new build system (based on Kubernetes under the hood) that should make builds a bit faster. If you're interested in testing it out, your organization can be whitelisted. Tweet at @quayio and we can get you set up.
Disclaimer: I work for Red Hat on OpenShift v3, an open-source enterprise-ready distribution of Kubernetes.
We have solved #2 by deploying an internal docker registry [1], pushing to it once a build succeeds, and automatically (or not) triggering deployments once an expected image lands in it [2].
In the future we will support webhooks so you can trigger deployments from external registries such as quay.io or DockerHub.
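To give a feel for the shape of it (not exact commands, and the image stream/tag names here are made up): once a build has pushed to the internal registry, promoting an image can be as small as retagging it into the stream a deployment config watches, and an image change trigger rolls out the new version from there.

    # move the "prod" tag to the freshly built image; an ImageChange trigger
    # on the deployment config then kicks off the rollout automatically
    oc tag myapp:latest myapp:prod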
There are a number of ways to do this. You could have your CI system generate the new object manifest (say nodeapp1.yaml) in full, with image: nodeapp:$version filled in. Or you could have your CI system do a simple sed, like:
    sed -i -e 's^:latest^:'$VERSION'^' kube/myapp-deployment.yaml
Regardless of how you do that, your CI/CD can then run simple kubectl commands to deploy the new container:
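For example, reusing the manifest path from the sed step (exact flags will vary with your setup):

    kubectl apply -f kube/myapp-deployment.yaml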
One of the problems here I've yet to tackle: if the deployment fails while you're doing rolling updates, you won't get a failing exit code for the build to fail in CI/CD. I need to call the Kubernetes API to see which revision of the container/deployment is now running and compare it to the previously running version to fully automate CI/CD. I haven't had time to set this up, but I will soon unless I discover a better method.
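One way to close that gap without talking to the API directly (I haven't battle-tested this, so treat it as a sketch) is to have the CI job block on the rollout and fail on a non-zero exit:

    # blocks until the rollout finishes; with a progressDeadlineSeconds set on the
    # deployment, this exits non-zero when the rollout is reported as failed
    kubectl rollout status deployment/myapp
    # and you can always back out the change with
    kubectl rollout undo deployment/myapp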
One of the engineers at CoreOS has a WIP demo that spins up a staging instance of each GitHub PR on Kubernetes [0]. Here's an example PR [1]. I imagine the scope of this tool could be extended to a staging cluster after merge.
One host? Easy enough by hand. Two to twenty hosts? Harder, especially when ensuring that the hosts are set up identically.
Twenty-one hosts on up? Very hard.
Automated creation and removal of hosts? Virtually impossible.
Steps 3 and 4 can be avoided if you like, using any number of techniques (including the good old-fashioned Linux package with init files, sketched below), but they do make a number of challenges disappear (such as consistency in code and deployment across hosts) once you run into those problems.
So, no, not everyone needs Kubernetes (I'd argue most people don't), but I can't see not using 1 or 2 (even if 1 is bash scripts).
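For scale, that package-plus-init-files route really can be a handful of lines of bash per host (package, host, and unit names here are made up, and it assumes systemd):

    # copy a prebuilt package over and (re)start the service it ships
    scp myapp_1.0.0_amd64.deb web1:/tmp/
    ssh web1 'sudo dpkg -i /tmp/myapp_1.0.0_amd64.deb && sudo systemctl restart myapp'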
The problem with "by hand" is that the dependencies and environment needed for that app to run now exist only in the brain of the person with that hand, or in a separate document that is prone to being out of date. With a build environment that includes containerization, all of your app's dependencies are guaranteed to be documented, or it will not run correctly.
Your assessment is correct. It takes much longer to just run the app for the first time. The savings come in when you want to redeploy the app in multiple environments, test it, and develop new features on it. That's when all of these steps make sense - although, hopefully you'll be able to remove steps 1 & 2 when everything is a docker / Kubernetes workload.
Most of those technologies overlap. Except for some disaster development scenarios, no one is actually using all those technologies in the same environment (i.e. they're not using Chef and Vagrant and Docker in production at the same time).
Chef/Puppet/Salt/Ansible these days are just used to provision 1 VM/container (if even that, many people have just reverted to shell scripts) and that container is then managed using a container management tool (Swarm/Kubernetes).
"(if even that, many people have just reverted to shell scripts)"
Isn't Ansible basically just a way of executing shell scripts across multiple hosts in a sensible manner? At least that's how it's been explained to me.
Yes. Most of the value of Ansible is the library of 100s of built-in tasks, like copying a file and replacing variables in it, or restarting a service if the config changed. It can do a lot of work in a few lines... but it can also be a bit too much if you are already in a single container.
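For example, the ad-hoc form is literally "run this task on these hosts" (inventory, group, and paths made up):

    # copy a config to every host in the "web" group, then restart the service
    ansible web -i hosts -m copy -a "src=app.conf dest=/etc/myapp/app.conf"
    ansible web -i hosts -m service -a "name=myapp state=restarted"
    # (the "only restart if the config actually changed" bit is what playbook handlers are for)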
'Bringing up an app' is the easiest part of ops work. Everything that happens after that is hard and immensely complicated. These tools do not exist for their own sake.
I'd argue that while containers have been attractive for that reason, they're actually really bad at it (layered binary blobs?) -- and good at the other stuff. It's pretty unfortunate that's been one of the main propelling forces behind containerization, I think we've ended up with subpar tools because of it.
Bringing up an app isn't the use case. Bringing up an app dozens, or hundreds, of times is.
Think of it like this. If you need to build one moderately simple widget, is it faster to build it by hand, or to build an entire factory to make it? Obviously, building it by hand. Now, how about if you need to build 100, or 100,000? How about if they all need to be exactly the same? Factories start to make sense at a certain point.
What? No. You don't use Chef to run Vagrant to run Kubernetes to set up a container to run your app. That doesn't make any sense, and makes me wonder if you are lashing out simply because you don't understand what these technologies actually do.
Managing one app on one machine (physical or virtual) is a significantly different problem from needing to manage dozens or hundreds of applications/servers.
When you need to automate the entire deploy process, possibly scale the number of servers/services up or down based on demand, or run the same operation across hundreds of servers: that is when you need Kubernetes. Trying to do any of these tasks manually when you're dealing with a large number of machines/services/servers becomes unmanageable very fast.
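That's the point where one command replacing a pile of per-host scripts starts to pay for itself, e.g. (names made up):

    # pin the replica count by hand...
    kubectl scale deployment myapp --replicas=20
    # ...or let the cluster scale it on CPU load
    kubectl autoscale deployment myapp --min=2 --max=50 --cpu-percent=80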
You can script all that yourself if you properly document it, so somebody else can come along and figure out exactly what is going on should you disappear/quit/call in sick/get hit by a bus.
Elastic's Beats ( https://www.elastic.co/products/beats ) can do all the monitoring/stats. Deploying snapshots to multiple servers can also be automated in Go, or you can drop a Go binary on the server for some kind of command-and-control architecture for continuous remote maintenance, starting/stopping containers, etc. (assuming all security precautions have been considered).
This worked for a deployment of ~75 Docker containers; YMMV.
1) Use one of Chef/Puppet/Salt/Ansible to bootstrap docker
2) Use docker to build a container for the app (once)
3) Use docker to start your container on multiple hosts (roughly as sketched below)
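Steps 2 and 3 boil down to something like this (registry and app names made up; the step-1 tool just needs to get docker installed and run the last line on each host):

    # build and publish the image once
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0
    # then, on each host
    docker run -d --restart=always --name myapp registry.example.com/myapp:1.0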