zsoltkacsandi's comments | Hacker News

> As a programmer, I want to write more open source than ever, now.

I believe open source will become a bit less relevant in its current form, as solution- or project-tailored libraries/frameworks can be generated in a few hours with LLMs.


€699 + €9.99/month for a smartphone that is supposed to be a dumb phone, with a custom locked-down OS. Punkt is losing sight of what made their previous products successful.

I have had my iPhone 13 mini for 4 years; owning a Punkt MC03 for the same amount of time would cost me about €1178 (€699 + 48 × €9.99). Thanks, but no thanks.


I have been using the most recent Claude, ChatGPT and Gemini models for coding for a bit more than a year, on a daily basis.

They are pretty good at writing code *after* I have thoroughly described what to do, step by step. If you miss a small detail, they go off the rails and the end result is a complete mess that takes hours to clean up. This still requires years of coding experience and planning ahead in your head; you won't be able to skip that, or replace developers with LLMs. They are like autocomplete on steroids, and that's pretty much it.


Yes, what you are describing is exactly what Kiro solves.


> Through Kiro, we reinvented how developers work with AI agents.

Even according to its documentation it is still built for developers, so my point still stands. You need dev experience to use this tool, the same as with other LLM-based coding tools.


Everything in infrastructure is a set of trade-offs that work in both directions.

If you want better monitoring, metrics, availability, orchestration, logging, and so on, you pay for it with time, money, and complexity.

If you can't justify that cost, you're free to use simpler tools.

Just because everyone sets up a Kubernetes / Prometheus / ELK stack to host a web app that would happily run on a single VPS doesn't mean you need to do the same, or that nowadays this is the baseline for running something.


> Thinking of Kubernetes as a runtime for declarative infrastructure instead of a mere orchestrator results in very practical approaches to operate your cluster.

Unpopular opinion, but the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach.

Problems usually come when we use tools to solve things they weren't made for. That is why - in my opinion - it is super important to treat a container orchestrator as a container orchestrator.


Kubernetes is explicitly designed to do what the article describes. In that respect the article is just describing what you can find in the standard Kubernetes docs.

> it is super important to treat a container orchestrator as a container orchestrator.

Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.
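
To illustrate the shared idea, here is a toy sketch of the desired-state loop (not how either tool is actually implemented; `observeRunning`, `startContainer`, and `stopContainer` are hypothetical stand-ins):

```go
// Toy sketch of the desired-state pattern that both Kubernetes and
// Docker Compose implement in some form: observe the actual state,
// compare it with the declared state, and act on the difference.
package main

import "fmt"

func observeRunning() int  { return 2 } // stand-in for querying the runtime
func startContainer(i int) { fmt.Println("starting replica", i) }
func stopContainer(i int)  { fmt.Println("stopping replica", i) }

func reconcile(desired int) {
	actual := observeRunning()
	for i := actual; i < desired; i++ { // too few: start more
		startContainer(i)
	}
	for i := actual; i > desired; i-- { // too many: stop the extras
		stopContainer(i - 1)
	}
}

func main() {
	const desiredReplicas = 3 // the declarative spec, e.g. parsed from a YAML file
	reconcile(desiredReplicas)
}
```

Roughly speaking, Kubernetes runs this loop continuously, while Compose converges once per `docker compose up`.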


> Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.

The way something describes the desired state (declaratively, for example) has nothing to do with whether it is a container orchestrator or not.

If you open the Kubernetes website, do you know what the first thing you see is? "Production-Grade Container Orchestration". Even according to their own docs, Kubernetes is a container orchestrator.


I feel like the author has a good grasp of the Kubernetes design... What about the approach is problematic? And why don't you think that is how Kubernetes was designed to be used?


I wrote some personal stories below in this thread as a response to another user.


But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s. Or perhaps using non-native tools or wrappers.


> But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s.

Yes, and 99% of companies do this. It is quite common to use Terraform/AWS CDK/Pulumi/etc to provision the infrastructure, and ArgoCD/Helm/etc to manage the resources on Kubernetes. There is nothing wrong with it.


It would have helped if you had told us why you don't like this approach.


It's right there:

> the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach

But some more concrete stories:

Once, while I was on call, I got paged because a Kubernetes node was running out of disk space. The root cause was the logging pipeline. Normally, debugging a "no space left on device" issue in a logging pipeline is fairly straightforward, if the tools are used as intended. This time, they weren't.

The entire pipeline was managed by a custom-built logging operator, designed to let teams describe logging pipelines declaratively. The problem? The resource definitions alone were around 20,000 lines of YAML. In the middle of the night, I had to reverse-engineer how the operator translated that declarative configuration into an actual pipeline. It took three days and multiple SREs to fully understand and fix the issue. Without such declarative magic, an issue like this usually takes about an hour to solve.

Another example: external-dns. It's commonly used to manage DNS declaratively in Kubernetes. We had multiple clusters using Route 53 in the same AWS account. Route 53 has a global API request limit per account. When two or more clusters tried to reconcile DNS records at the same time, one would hit the quota. The others would partially fail, drift out of sync, and trigger retries - creating one of the messiest cross-cluster race conditions I've ever dealt with.
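
One mitigation (not a real fix for the shared-quota design problem) is throttling your own calls well below the account limit. A minimal sketch of that kind of client-side limiter, with a hypothetical `changeRecordSet` standing in for the real Route 53 call:

```go
// Minimal sketch of client-side throttling against a shared,
// per-account API quota, using golang.org/x/time/rate. Since the
// Route 53 limit is account-wide, each cluster's limiter has to be
// set with the *total* budget across all clusters in mind.
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

// changeRecordSet is a hypothetical stand-in for the real
// Route 53 ChangeResourceRecordSets call.
func changeRecordSet(name, value string) {
	fmt.Printf("upserting %s -> %s\n", name, value)
}

func main() {
	// e.g. 2 req/s per cluster, leaving headroom under the account quota
	limiter := rate.NewLimiter(rate.Limit(2), 1)
	ctx := context.Background()

	for i := 0; i < 5; i++ {
		if err := limiter.Wait(ctx); err != nil { // blocks until a token is available
			return
		}
		changeRecordSet(fmt.Sprintf("svc-%d.example.com", i), "203.0.113.10")
	}
}
```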

And I have plenty more stories like these.


You mention a questionably designed custom operator and an add-on from a SIG. This is like blaming Linux for the UI in Gimp.


> a questionably designed custom operator

This is the logging operator, the most widely used logging operator in the cloud-native ecosystem (we built it).

> This is like blaming Linux for the UI in Gimp.

I never blamed anything; read my comment again. I only pointed out that problems arise when you use something to do something it is not built for, like a container orchestrator managing infrastructure (DNS, logging pipelines). That is why I wrote that "it is super important to treat a container orchestrator as a container orchestrator". Not a logging pipeline orchestrator, or a control plane for Route 53 DNS.

This has nothing to do with Kubernetes, but with the people who choose to do everything with it (managing the whole infrastructure).


Also, it's not like logging setups outside of k8s can't be a horror show too. Like, have you ever had to troubleshoot an rsyslog-based ELK setup? I'll forever have nightmares from debugging RainerScript mixed with the declarative config and having to read the source code to find out why all of our logs were getting dropped in the middle of the night.


I'd also argue the whole external DNS thing could have happened with any dynamic DNS automation... And yes, it is a completely optional add-on!


> tiny project for building a tiny Linux distribution

I am working on something similar in Go, and writing an educational blog post series about it: https://serversfor.dev/linux-inside-out/
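
For a flavor of where the series goes, here is a stripped-down PID 1 in Go (a sketch in the spirit of the posts, not the exact code from them):

```go
// Stripped-down PID 1 sketch: mount the pseudo-filesystems, hand
// control to a shell, and reap orphaned children (a duty unique to
// PID 1). Assumes /proc, /sys, and /bin/sh exist in the initramfs.
// Build with CGO_ENABLED=0 so the binary is fully static.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if err := syscall.Mount("proc", "/proc", "proc", 0, ""); err != nil {
		log.Printf("mount /proc: %v", err)
	}
	if err := syscall.Mount("sysfs", "/sys", "sysfs", 0, ""); err != nil {
		log.Printf("mount /sys: %v", err)
	}

	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("start shell: %v", err)
	}

	// Reap children until the shell itself exits; as PID 1, every
	// orphaned process on the system becomes our child.
	for {
		pid, err := syscall.Wait4(-1, nil, 0, nil)
		if err != nil || pid == cmd.Process.Pid {
			break
		}
	}
}
```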


I'm enjoying the articles! I went through the exercise and it was my first time running my own executable as PID 1. That was fun and educational.


> Economists say that a typical middle-class family today is richer than one in the 1960s.

My mother was a nurse and my father a warehouse worker. In their early 30s, on minimum wage, they were able to buy an apartment and still save enough to afford another one.

My wife and I are both software developers. We work longer hours, are far more formally educated, and have to continuously study outside of work just to stay employable in a rapidly changing industry. I earn a high salary at a large U.S. company, with bonuses and stock compensation. Yet as we approach 40, we still haven't reached the level of financial security my parents achieved decades earlier.

Nominally, our incomes are higher, but real purchasing power, access to housing, and long-term security have eroded, even as the demands on our time, education, and productivity have increased.


One of the worst things IMHO is a loss of... hard to explain, but optionality? When the price of shelter is high and the need for occupational specialisation is also high, you feel more "trapped", situationally. It seems like people used to be able to YOLO things a lot more.


I've operated both self-hosted and managed database clusters with complex topologies and mission-critical data at well-known tech companies.

Managed database services mostly automate a subset of routine operational work, things like backups, some configuration management, and software upgrades. But they don't remove the need for real database operations. You still have to validate restores, build and rehearse a disaster recovery plan, design and review schemas, review and optimize queries, tune indexes, and fine-tune configuration, among other essentials.

In one incident, AWS support couldn't determine what was wrong with an RDS cluster and advised us to "try restarting it".

Bottom line: even with managed databases, you still need people on the team who are strong in DBOps. You need standard operating procedures and automation, built by your team. Without that expertise, you're taking on serious risk, including potentially catastrophic failure modes.


I've had an RDS instance run out of disk space and then get stuck in "modifying" for 24 hours (until an AWS operator manually SSH'd in I guess). We had to restore from the latest snapshot and manually rebuild the missing data from logs/other artifacts in the meantime to restore service.

I would've very much preferred being able to SSH in myself and fix it on the spot. Ironically, the only reason it ran out of space in the first place is that the AWS markup on storage is so huge we were operating with little margin for error; none of that would happen with a bare-metal host where I can rent 1TB of NVMe for a mere 20 bucks a month.

As far as I know we never got any kind of compensation for this, so RDS ended up being a net negative for this company, tens of thousands spent over a few years for laptop-grade performance and it couldn't even do its promised job the only time it was needed.


In the post it is set to 0. `CGO_ENABLED=0 go build -o init .`

The only reason is that I like to be explicit, and I could not know what was already set in the user's environment.
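
A quick way to verify the result: `file init` should report the binary as statically linked; with cgo enabled, packages like `net` or `os/user` can otherwise pull in dynamic linking against libc.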


The goal was to strip away most of the complexities (including C), to make the topic more approachable for a broader audience.

Go seemed a perfect fit: it is easy to pick up the syntax and see what is going on, but you can still stay close to the OS.

