1) Developer pushes code to the repo; it's tested, and if the tests pass:
2) An rkt/Docker image is built by the CI/CD system and pushed to a Docker registry
3) Automatically deploy the new image: staging for tests, master for production (this can be a manual step)
4) Sit back and relax, because the time invested up front saves me hassle in the future
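In shell terms, the loop is roughly this (registry, app, and variable names here are illustrative, not my exact setup):

    npm test                                             # 1) run the tests
    docker build -t registry.example.com/myapp:$SHA .    # 2) build the image...
    docker push registry.example.com/myapp:$SHA          #    ...and push it to the registry
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:$SHA   # 3) roll it out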
I have 30+ Node.js apps that can be updated on a whim in under 5 seconds each.
Setting up bare-metal instances, even when using something like Ansible, is slow and cumbersome compared to what I'm now doing with k8s.
Doing an apply/patch to Kubernetes nodes takes seconds (assuming Services aren't affected).
edit: Sorry, for the unfamiliar: by "Services" I mean load balancers/ingress; it's a k8s term. It takes 45 seconds to 1 minute to update GLBCs, so I modify them as rarely as possible.
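Concretely, by apply/patch I mean commands along these lines (names are made up):

    kubectl apply -f kube/myapp-deployment.yaml
    kubectl patch deployment myapp -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"myapp:v2"}]}}}}'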
We push our Docker images to quay.io, but pull+deploy is still manual. How does the target platform detect a new image on the registry in order to kick off a pull+deploy?
Take a look at Drone (http://readme.drone.io, not to be confused with the hosted Drone app). It allows you to execute arbitrary commands after publishing the Docker image. You can tell it to run kubectl to perform a deploy, for example.
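For example, the post-publish step could be a one-liner along these lines (deployment and image names here are hypothetical):

    # hypothetical deploy step run after the image is pushed:
    kubectl set image deployment/myapp myapp=quay.io/myorg/myapp:$COMMIT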
Drone can also run CI tests and build stuff in temporary containers, which keeps compilers etc. from taking up space in the final image and negates the need for squashing (which busts your cache).
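Roughly, the idea is something like this (a hypothetical Node.js example):

    # compile/install in a throwaway container...
    docker run --rm -v "$PWD":/src -w /src node:6 npm install --production
    # ...then build a slim runtime image; the Dockerfile only copies the built artifacts
    docker build -t myapp:$VERSION .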
Much faster than Quay. Quay starts a new VM in AWS every time it builds, which is super slow. Drone just starts containers.
(You have to self-host Drone, though. Since it requires running Docker in privileged mode, it's not a good fit for running under Kubernetes, unfortunately.)
(Kubernetes on GKE always runs in privileged mode, but that doesn't mean it's a good idea!)
Quay.io has been beta testing a new build system (based on Kubernetes under the hood) that should make builds a bit faster. If you're interested in testing it out, your organization can be whitelisted. Tweet at @quayio and we can get you set up.
Disclaimer: I work for Red Hat on OpenShift v3, an open-source enterprise-ready distribution of Kubernetes.
We have solved #2 by deploying an internal Docker registry [1], pushing to it once a build succeeds, and automatically (or not) triggering deployments once an expected image lands in it [2].
In the future we will support webhooks, so you can trigger deployments from external registries such as quay.io or Docker Hub.
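For a rough idea, an image-change trigger on a DeploymentConfig looks something like this (names are illustrative):

    triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
        - myapp
        from:
          kind: ImageStreamTag
          name: myapp:latest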
There are a number of ways to do this. You could have your CI system generate the new object manifest (say nodeapp1.yaml) from scratch, with image: nodeapp:$version filled in. Or you could have your CI system do a simple sed, like:

    sed -i -e 's^:latest^:'$VERSION'^' kube/myapp-deployment.yaml
Regardless of how you do that, your CI/CD can then run simple kubectl commands to deploy the new container, for example:
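    # e.g., assuming the manifest edited by the sed above:
    kubectl apply -f kube/myapp-deployment.yaml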
One of the problems here that I've yet to tackle: if the deployment fails while you're doing rolling updates, you won't get a failing exit code for CI/CD to fail the build on. I need to call the Kubernetes API to see which revision of the container/deployment is now running and compare it to the previously running version to fully automate CI/CD. I haven't had time to set this up, but I will soon unless I discover a better method.
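One possible stopgap, assuming your kubectl is recent enough to have rollout status (names are hypothetical):

    # block until the rollout finishes; fail the build if it takes too long
    timeout 300 kubectl rollout status deployment/myapp || exit 1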
One of the engineers at CoreOS has a WIP demo that spins up a staging instance of each GitHub PR on Kubernetes [0]. Here's an example PR [1]. I imagine the scope of this tool could be extended to a staging cluster after merge.