Hacker News | gryfft's comments


I had the same reaction. The disclaimer at the bottom doesn't mention AI, but I have a feeling this was generated from a prompt to consolidate the 24 human submissions into a single essay.

Tangentially, I really look forward to the day "Not X but Y" stops being so overused by LLMs. It's a valid and useful construction in a vacuum, one which we should be able to use, but its overuse has gone past semantic satiety into something like semantic emesis.


I think there's also an element here of... when you work a long time on something, you tend to become emotionally attached to it, and you want it to be good and work well. Time to fix technical debt and make the product good is often implicitly dangled as a carrot in front of people who care about that sort of thing. "We can prioritize that after we ship $FEATURE."

This then feels like a betrayal on an emotional level when, instead of a nice block of time to fix technical debt, the priority becomes $NEXT_FEATURE (the "features that don't exist yet").

Management that can successfully keep the treadmill running ships features faster, so it keeps happening, and can contribute to burnout as (what felt like) implicit promises are repeatedly broken for the good of the business at the expense of the product.


You can just remember that in many cases you're not replaceable, and can push back on management accordingly. They love it, really.


Don't drag me into this.


Do you have notifications set up or something? xD


No, I just occasionally suffer a failure of self-control when I see my almost-namesake in a comment.


What you want and what behaviors you may be induced toward via a nonstop campaign of unwanted UX changes are two different things.

When a pusher gives you some free drugs, they are not taking into account whether you want to be addicted to drugs. Not part of the business model.


It's possible that particular user, despite not wanting the shorts, will keep paying for YouTube for longer because they enjoy shorts. It's also possible that they genuinely don't like them and are less likely to keep paying because of them. People are different. What keeps some customers engaged can turn off others.


> It seems to me that the current tech scene doesn't reward simple.

A deal with the devil was made. The C-suite gets to tell a story that k8s practices let you suck every penny out of the compute you already paid for. Modern devs get to do constant busywork adding complexity everywhere, creating job security and opportunities to use fun new toys. "Here's how we're using AI to right-size our pods! Never mind the actual costs and reliability compared to traditional infrastructure; we only ever need to talk about the happy path/best-case scenarios."


This just seems like sensationalist nonsense spoken by someone who hasn’t done a second of Ops work.

Kubernetes is incredibly reliable compared to traditional infrastructure. It eliminates a ton of the configuration management dependency hellscape and inconsistent application deployments that traditional infrastructure entails.

Immutable containers provide a major benefit to development velocity and deployment reliability. Containers are far faster to pull and start than VM deployments, which end up needing either an annoying pipeline for building machine images or some complex, failure-prone deployment system.

Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)

And who is really working for a company that has a small deployment? I’d say that most medium-sized tech companies can easily justify the complexity of running a kubernetes cluster.

Networking can be complex with Kubernetes, but it’s only as complex as your service architecture.

These days there are more solutions than ever that remove a lot of the management burden but leave you with all the benefits of having a cluster, e.g., Talos Linux.


> Kubernetes is incredibly reliable compared to traditional infrastructure.

The fuck it is.

> It eliminates a ton of the configuration management

Have you used k8s recently? Getting it secure and sane is a lot of work. Even if you buy in to sensible defaults, it's a huge amount of work to get a safe, low-blast-radius deployment pipeline working reliably.

Like, if you want vaguely secure secrets, that's an add-on. If you want decent, non-stupid networking, that's an add-on. Everything is split-horizon DNS.
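
(For context: a stock Secret is only base64-encoded, not encrypted, which is exactly why add-ons like Sealed Secrets or External Secrets exist. A minimal sketch, names hypothetical:

    apiVersion: v1
    kind: Secret
    metadata:
      name: app-db-credentials   # hypothetical
      namespace: prod            # hypothetical
    type: Opaque
    data:
      password: aHVudGVyMg==     # just base64("hunter2"); trivially reversible

Anyone who can read the object, or the etcd backing store unless encryption at rest is configured, can decode that value.)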

That's before we get to state management; playing the PVC lottery is not fun, which means it's easier to use a clustered filesystem. That's how fucked it is.
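
(To illustrate the "PVC lottery": a claim asks for storage abstractly, and what you actually get depends entirely on the cluster's default StorageClass and provisioner. A minimal sketch, name hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data        # hypothetical
    spec:
      accessModes:
        - ReadWriteOnce     # single-node attach; multi-writer support varies by provisioner
      resources:
        requests:
          storage: 10Gi
      # storageClassName omitted: you get whatever the cluster default happens to be

Whether that resolves to a local disk, NFS, or a cloud block device, each with very different failure modes, is up to the cluster.)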

> there’s a lot of complexity to configuration management on traditional VMs

Not really. You need at least Terraform to spin up your k8s cluster in the first place, and it's not that much harder to extend it to use real machines instead.

It is more expensive, unless you're bin-packing with Docker.

> cough…Chef

Chef can also fuck off, although Facebook somehow uses it on something like 8 million servers.

> Networking can be complex with Kubernetes

Try making it use IPv6.

Look, what the industry needs is a simple orchestration layer that places Docker containers according to a DAG. You can have dependencies and, if you want, a plugin system to let you paint yourself into a corner.

Have some hooks so we can trigger actions based on backlog.

Leave the networking to the network, because DHCP and DNS are a solved problem.

What I'm describing is basically ECS, but without the horrid config language.
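
(As a rough sketch of the dependency-DAG idea, Docker Compose's long-form depends_on already expresses it, albeit only on a single host; service and image names here are hypothetical:

    services:
      db:
        image: postgres:16
      migrate:
        image: myorg/migrations:latest   # hypothetical image
        depends_on:
          db:
            condition: service_started
      api:
        image: myorg/api:latest          # hypothetical image
        depends_on:
          migrate:
            condition: service_completed_successfully

The missing piece is a scheduler that does this placement across machines without dragging in the rest of k8s.)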


It was clear they didn't know what they were talking about when they claimed the main reason for Kubernetes was to save money. Kubernetes is just easy to complain about.


Exactly; if anything, Kubernetes will require a lot more money.


> Does Kubernetes have its downsides? Yeah, it’s complex overkill for small deployments or monolithic applications. But to be honest, there’s a lot of complexity to configuration management on traditional VMs with a lot of bad, not-so-gracefully aging tooling (cough…Chef Software)

I have a small application running under single-node k3s. It's slightly (but not hugely) easier to work with than the prior version that I had running under IIS.


The problem is that some Kubernetes features would have a positive impact on development velocity in theory; however, in my experience (25 years of ops and devops), the cost of keeping up often eats those benefits and results in a net negative.

This is not always a problem of Kubernetes itself though, but of teams always chasing after the latest shiny thing.


Also an old man from the VMS/SPARC days, I'm still doing "devops" and just deployed a realtime streaming webapp tool for our team to k8s pods in a few days. It was incredibly easy and I get so much for free.

Automatically created for me:

- Ingress, TLS, domain name (sketched below)
- Deployment strategy
- Dev/prod environments through Helm
- Single-repo configuration for source code
- Reproducible dev/prod build+run (Docker)...
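
For the "Ingress, TLS, domain name" item, here is a hedged sketch of the kind of per-app manifest such a setup typically templates out, assuming an NGINX ingress controller and cert-manager; the hostname, issuer, and service names are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mytool                 # hypothetical
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed ClusterIssuer
    spec:
      ingressClassName: nginx
      tls:
        - hosts: [mytool.example.com]
          secretName: mytool-tls   # cert-manager populates this with a certificate
      rules:
        - host: mytool.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: mytool
                    port:
                      number: 80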

If a company sets this up correctly developers can create tooling incredibly fast without any tickets from a core infra team. It's all stable and very performant.

I'd never go back to the old way of deploying applications after seeing it work well.


> If a company sets this up correctly developers can create tooling incredibly fast

I find that it has its place in companies with lots of microservices. But I think that because deployment is made "easy", it encourages unnecessary fragmentation, and one ends up with a distributed monolith.

In my opinion, unless you actually have separate products or a large engineering team, a monolith is the way to go. And in that case you can get far with a standard CI/CD pipeline and "old school" deployments.

But of course I will never voice my opinion in my current company to avoid the "boomer" comments behind my back. I want to stay employable and am happy to waste company resources to pad my resume. If the CTO doesn't care about reducing complexity and costs, why should I?


In my example it was a simple CRUD app, no microservices. It could just as easily have been run by scp'ing the entire dev dir to a VM and ensuring a port is open. But I wouldn't get many of the things I described above, and this way I don't need to monitor it at all.

Also a release is just a PR merge + helm upgrade.
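
To make that concrete, a hedged sketch of "PR merge + helm upgrade" as a GitHub Actions workflow; the chart path, release name, namespace, and KUBECONFIG_B64 secret are all assumptions:

    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest   # helm is preinstalled on GitHub's ubuntu-latest runners
        steps:
          - uses: actions/checkout@v4
          - name: Configure cluster access
            run: |
              mkdir -p ~/.kube
              echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > ~/.kube/config
          - name: Helm upgrade
            run: |
              helm upgrade --install mytool ./chart \
                --namespace tools --create-namespace \
                -f chart/values-prod.yaml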


You had PR merge and automatic release before Kubernetes too, and it's not that hard to configure.

If one has a small project where a few seconds of downtime is acceptable, you can just set up a simple GitHub Action triggered on commit/merge. It can scp the file to the server and run "systemctl restart" automatically. I have used this approach for small side projects (even with external paying users).
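
A minimal sketch of that approach, with the host, user, paths, service name, and DEPLOY_KEY secret all hypothetical:

    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # (build step elided; assume it produces ./build/myapp)
          - name: Copy binary and restart service
            run: |
              echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
              # copy to a temp path first, then move, to avoid writing over a running binary
              scp -i key -o StrictHostKeyChecking=accept-new \
                ./build/myapp deploy@myserver.example.com:/opt/myapp/myapp.new
              ssh -i key -o StrictHostKeyChecking=accept-new deploy@myserver.example.com \
                'mv /opt/myapp/myapp.new /opt/myapp/myapp && sudo systemctl restart myapp'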

And if you need a "no downtime" release, a proper CI/CD pipeline can handle a blue/green switch. I don't think you would spend much more time setting that up than setting up Kubernetes from scratch, unless you have extensive experience with Kubernetes.


You're not expecting them to set k8s up from scratch, just as you'd not expect the dev team to set up the datacentre power or networking from scratch for the server in your "scp and systemctl restart" scenario.

Typically, a k8s installation is looked after by a cross-functional Platform team, who look after not just the k8s cluster but also the gateways, service mesh, secrets management, observability and other common services, shared container images, CI/CD tooling, as well as platform security and governance.

These platform services then get consumed by the feature dev teams (of which there could be anywhere between half a dozen and multiple thousands). To deploy a new app, those dev teams need only create a repo and a helm chart, and the platform's self-service tooling will do the rest automatically. It really shouldn't take more than a few minutes for a team with some experience.
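
As a rough illustration of how small that per-team artifact can be, assuming the platform publishes a shared library chart (the chart name, dependency, and repository URL are hypothetical):

    apiVersion: v2
    name: mytool                          # hypothetical app name
    version: 0.1.0
    dependencies:
      - name: app-base                    # assumed platform-provided library chart
        version: 1.x.x
        repository: https://charts.example.com

Everything app-specific then goes into a short values.yaml, and the platform tooling handles the rest.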

Yes, it's optimised for a very different scale of operation than a single server at a managed hosting provider. But there are plenty of situations in which that scale is required, and it's there that k8s shines.


> just deployed a realtime streaming webapp tool for our team in a few days to k8s pods.

How long would you estimate that deployment would have taken with a more "classic" approach? (e.g. deploying to a Java application server)


Too open-ended a question, but in the "old days" it would be a ticket for a new VM, then back and forth between dev and infra to set up the host, deploy the application, etc...


If you had a really good team, hours. At most companies, days to weeks. At worst, months.

With a well-managed Kubernetes setup, around 5-15 minutes. Not a theoretical time; I have personally had thousands of devs launch that quickly on clusters I ran.


Mhm! And Google just sit there laughing at everyone. Mission accomplished


Can you provide screenshots or links that don't require login?


This just doesn't seem like it takes probability into account. Getting someone to the hospital fast is almost always going to be better than waiting, and moving someone isn't usually inherently damaging if they don't have a spinal injury. In the context of a heart attack, it seems indisputable to me that it is better to drive if you have a safe and sober driver available.


It isn't all about getting somebody TO the hospital but getting them INTO the hospital/ED/ER. EMS in an ambulance who are alerting a hospital of an MI en route will get their attention; a walk-in will have to wait unless there are obvious signs.

Calling 911 will normally get LEO on scene who know CPR and can do radio communications. A lot of dispatchers are EMDs (emergency medical dispatchers) who can start helping immediately. You may have off-duty EMTs nearby who are scanning the radio. Finding a fixed target is much easier than finding a moving target (white car headed towards hospital), and you are on your own if you get stuck in traffic. Statistically, 911/EMS is the best outcome. I agree with another commenter, exceptions do exist.


See also the Milgram Experiment.


Doesn’t fit here because you don’t know if obeying or ignoring causes the harm.


The comment I replied to mentioned Cialdini's Influence:

> just finished a disturbing section about how we are wired to obey an authority figure even when it causes harm.

I mentioned the Milgram Experiment specifically in the context of this comment.


> alive internet theory is a séance with this living internet. Resurrecting tens of millions of digital artifacts from the Internet Archive, visitors are immersed in a relentless barrage of human expression as they travel through the life of the web as we created it—every image, video, song, and text uploaded by a real person on the web.

I like this a lot. It sort of turns internet history into a lava lamp.

For those struggling with the styling on the splash page, the slider at the top lets you pick an era and stick with it.


love the lava lamp comparison!! i might borrow that for describing it

thanks for trying it :)

