
I can't decide if I think this is a good idea or not. Conceptually I like that I can get an S3 bucket/RDS database/SQS queue by using kubectl, but I'm not sure that's the best way to manage the lifecycle, especially for something like a container registry, which likely outlives any given k8s cluster.
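To be concrete, I mean something like this (a sketch; the exact API group/version and field names depend on the controller you install, e.g. ACK's S3 Bucket CRD):

    # declare the bucket as a custom resource and let the controller
    # reconcile it against AWS (group/version/fields are illustrative)
    cat <<EOF | kubectl apply -f -
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: my-app-assets
    spec:
      name: my-app-assets
    EOF

    # the same lifecycle verbs you use for pods now apply to the bucket
    kubectl get buckets
    kubectl delete bucket my-app-assets   # this is the part that gives me pause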


S3 buckets and RDS instances probably also outlive your cluster if you're treating your clusters as cattle. I want to make sure I don't delete my RDS instance while I'm doing a blue/green cluster upgrade. I can see defining SQS next to your application being handy (queues are somewhat stateful while there are messages in them, but I'm sure you can do something to ensure they're drained before destroying them).
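For example, before tearing a queue down you could gate the delete on it being empty, roughly like this (a sketch using the AWS CLI; the Queue custom resource name is illustrative):

    # wait until the queue reports no visible messages, then delete the CR
    QUEUE_URL=$(aws sqs get-queue-url --queue-name my-app-queue \
      --query QueueUrl --output text)
    until [ "$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
      --attribute-names ApproximateNumberOfMessages \
      --query 'Attributes.ApproximateNumberOfMessages' --output text)" = "0" ]; do
      sleep 10
    done
    kubectl delete queue my-app-queue   # assumes a Queue custom resource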


Maybe it makes sense for development, short-lived resources, or cattle. Don't get me started on cross-cluster use, though; having to think about where a resource is defined and used is a mess.

The same issue exists with Service Catalog and the Open Service Broker model. AWS ACK is far more expressive by comparison (one CRD per resource). The CRDs are more difficult to generate, though, compared to simply registering existing brokers.


I agree. There are also cases where many Kubernetes clusters in different regions consume the same cloud service, like an RDS cluster. So tight coupling of cloud services (stateful in most cases) and workloads doesn't work in some scenarios.

In Crossplane, our primary scenario is to have a dedicated small cluster for all the orchestration of infrastructure and many app clusters that consume resources from that central cluster. So clusters with workloads come and go, but the one with your infra stays and serves as the control plane. See the latest design about this workflow: https://github.com/crossplane/crossplane/blob/master/design/... I'd be happy to hear your feedback on the design.
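A rough sketch of the split (the group/kind below are illustrative of the model, not the exact Crossplane API): the central cluster holds the long-lived cloud resource, and app clusters only consume it, so they can be rebuilt without touching the database itself.

    # in the central infra cluster: the long-lived cloud resource
    cat <<EOF | kubectl --context infra-cluster apply -f -
    apiVersion: database.example.org/v1alpha1   # illustrative group/version
    kind: PostgreSQLInstance
    metadata:
      name: shared-db
    spec:
      storageGB: 20
      writeConnectionSecretToRef:
        name: shared-db-conn
    EOF

    # app clusters come and go; they only reference the connection details
    # published by the infra cluster, never the RDS instance directly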

Disclaimer: I'm a maintainer at Crossplane.


Why are these clusters going away?


We rebuild ours all the time. New config, k8s version upgrade, node OS patching.


Interesting. I'm only familiar with Mesos/Aurora, which is often considered outdated next to Kubernetes, but it can do all those things in place.

Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and migrate services between them?


You can definitely do the same with Kubernetes too; it's just that the scope is larger and it doesn't have a good reputation for rolling updates of the control plane.

> Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and migrate services between them?

Congratulations, you just discovered Cluster API.


The management cluster is definitely a pattern. Cluster API is designed to have a management cluster that builds workload clusters across regions and providers.

https://cluster-api.sigs.k8s.io/
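In practice the management cluster drives workload clusters through clusterctl and Cluster objects, roughly like this (a sketch; commands and flags vary by Cluster API version and provider):

    # install the Cluster API providers into the management cluster
    clusterctl init --infrastructure aws

    # render a workload cluster definition and apply it to the management cluster
    clusterctl config cluster prod-us-east-1 --kubernetes-version v1.18.2 \
      > prod-us-east-1.yaml
    kubectl apply -f prod-us-east-1.yaml

    # the management cluster then creates and upgrades the workload cluster's
    # control plane and machines; deleting the Cluster object tears it down
    kubectl get clusters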


People work in different ways. We do in-place upgrades across all components. In fact, that's sort of why we use k8s in the first place.

Also, I have an allergy to complexity, so I need fewer moving parts for health reasons.


Most of the time it's because of fear of breaking something during a rolling update, so people create a new cluster from scratch and slowly migrate workloads one by one.



