
> What? Setting up a load balancer or nginx is considered complex now?

It's not complex to set up a load balancer in a given specific environment. But it's another kind of ask to say "set up a load balancer, but also make it so that the load balancer exists in future dev environments that can be auto-set-up and auto-torn-down. And also make it so that the load balancer will work on dev laptops, AWS, Azure, Google Cloud, our private integration test cluster on site, and on our locally-hosted training environment, with the same configuration script." All of these things can be done in k8s, and basically are by default when you add your load balancer in k8s. They can be done in other ways too, or simply not done at all. But k8s offers a standardized way to approach these kinds of things.
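For the curious, "add your load balancer in k8s" is roughly one manifest. This is a minimal sketch, not a production config; the app name, labels, and ports are illustrative assumptions:

```yaml
# A Service of type LoadBalancer: the same manifest provisions an ELB on AWS,
# a cloud load balancer on Azure or Google Cloud, or (with MetalLB or similar)
# an address on a bare-metal cluster. The cluster's cloud controller fills in
# the provider-specific details.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb        # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app          # routes to pods labeled app: my-app
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 8080   # port the application pods listen on
```

On a dev laptop the same manifest works under minikube or kind (e.g. `minikube tunnel` supplies the external IP), which is the portability being described above.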



In my mind standardisation is how we really solve problems in much of software development. Everything before that is just practice and/or hacks.


> In my mind standardisation is how we really solve problems in much of software development.

I've been having this thought very often lately.

The only way for humans to do something faster is to use a machine. Any machine is built on some assumption that something is repeatedly true, that some things can be repeatedly interacted with in the same way.

Finding true invariants is very hard, but our world is increasingly malleable. Over time it is getting easier to invent new invariants and pad things out so that the invariant holds.


It's true not just for machines but for engineering in general. Whether it's civil or mechanical or electronic or semiconductor engineering, the foundation is built on setting boundary conditions that make the natural world predictable so that it can be reliably manipulated. Things most often go wrong when those conditions are poorly understood, constrained, or modeled, such as when using an unproven material, using imprecise parts, or ignoring thermal expansion when designing structural components.

Engineers have a plethora of quality control standards and centuries of built up knowledge to make this chaos manageable and the problems tractable.


I'm fine with standardization, as long as I set the standard. For standard 0, I propose that the only spelling for "standardization" will be with a 'z' (which is itself pronounced "zee").


American here too -- but I don't think it's reasonable to demand American spelling from the rest of the Anglophone world.


I think the downvotes are missing the point. This is a key problem with standards: sometimes they standardize around something unreasonable. And tech is already riddled with all sorts of standards which are half-implemented in n different ways.

I think a better approach would be to have a specification for more robust negotiation protocols. When I see "standardisation," I already know that this means the same thing as "standardization" and furthermore that I should expect to see "colour"/"honour," organizations referred to in plural, "from today" rather than "beginning today" or "starting today," and even "jumpers" over "sweaters," "lorries" over "trucks," "biscuits" over "cookies," and more interrogative sentences in conversation. A British English speaker likely does the same process in reverse.


Perhaps I should clarify that "demanding American spelling is unreasonable" was specific to the context of HN (or other open discussion forums with international readership).

Within say, the volunteer-maintained documentation of MDN, the tradeoffs are quite different. There, ease of reading for reference by busy coders is much more valuable relative to ease of typing up a new contribution. Frequent switches between "color" and "colour" become a time-wasting distraction.

MDN should pick a standard and insist on it. And if Ubuntu chooses to require British spelling throughout, I'd say that's good.


Some of those requirements are far-fetched. Multiple cloud environments AND on-prem?

Ansible and Vagrant are not perfect, but I think they are far simpler than a single node k8s instance, and more representative of an actual production environment.
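As a rough sketch of that alternative (hosts, group names, and the template path are illustrative assumptions), an Ansible play for an nginx load balancer might look like:

```yaml
# Illustrative Ansible play: install nginx on a "loadbalancers" group and
# template in a load-balancer config. nginx-lb.conf.j2 is a hypothetical
# template that would define the upstream servers.
- hosts: loadbalancers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy load-balancer configuration
      ansible.builtin.template:
        src: nginx-lb.conf.j2
        dest: /etc/nginx/conf.d/lb.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

The tradeoff is the one the parent comment implies: this targets machines you enumerate in an inventory, rather than abstracting the environment away the way a k8s manifest does.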


I’ve seen my company go multi-cloud just to appeal to a single client. Now we’ll need a multi-cloud, multi-continent setup to handle European clients. And I’m sure in another two years we’ll need our whole stack in China to support yet another client's requirements.

This is not my strength in any way, but from what I hear from those teams, Kubernetes will be a godsend.



