Hacker News

> Ultimately I think over the next two years or so, Anthropic and OpenAI will evolve their product from "coding assistant" to "engineering team replacement"

The way I see it, there will always be a layer in the corporate organization where someone has to interact with the machine: the transition layer from humans to AIs. This is true no matter how high up the hierarchy you replace the humans, be it the engineering layer, the engineering managers, or even their managers.

Given the above, it feels reasonable to believe that whoever holds that role (the person responsible for converting human management's ideas into prompts, or whatever replaces text prompts in the future) will do a better job if they have a high degree of technical competence. That is to say, I believe most companies will still want, and benefit from, that employee being an engineer: someone who converts non-technical CEO fever dreams and ambitions into strict technical specifications and prompts.

What this means for us, our careers, or Anthropic's marketing department, I cannot say.





That reminds me of the time when 3GLs arrived and bosses claimed they no longer needed developers, because anyone could write code in those English-like languages.

Then when mouse-based tools like Visual Basic arrived, same story, no need for developers because anyone can write programs by clicking!

Now bosses think that with AI anyone will be able to create software, but the truth is that you'll still need software engineers to use those tools.

Will we need fewer people? Maybe. But over the past 40 years we have multiplied developer productivity many times over, and we still need more and more developers because the needs have grown even faster.


My suspicion is that it will be bad for salaries, mostly because it will kill the "looks difficult" moat that software development currently has. Developers know that "understanding source code" is far from the hard part of developing software, but non-technical folks' immediate recoiling in the face of the moon runes has made our profession's high pay easy to justify for ages. If our jobs transition to largely "communing with the machines", then we'll go from a "looks hard, is hard" job to a "looks easy, is hard" job, which historically hurts bargaining power.

I don't think "looks difficult" has been driving wages. Leadership at FAANG and the like knows what's difficult and what's not. It's just marginal ROI: if you have a trillion-dollar market and some feature could increase it by 0.0001%, you hire some engineers to give it a try. If other companies are competing for the same engineers for the same reasons, salaries skyrocket.
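As a back-of-the-envelope illustration of that marginal-ROI argument (the engineer cost and team size here are made-up assumptions; only the trillion-dollar market and 0.0001% uplift come from the comment above):

```python
# Hypothetical marginal-ROI calculation for hiring a small team.
market = 1_000_000_000_000   # $1T addressable market
uplift = 0.0001 / 100        # a 0.0001% increase, as a fraction
expected_gain = market * uplift

engineer_cost = 300_000      # assumed fully loaded annual cost per engineer
team_cost = 3 * engineer_cost

print(f"expected gain: ${expected_gain:,.0f}")   # $1,000,000
print(f"net of team:   ${expected_gain - team_cost:,.0f}")  # $100,000
```

Even a vanishingly small fractional uplift on a large enough market can cover several engineer-years, which is the mechanism the comment describes.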

I wonder whether the actual productivity changes will even matter for the economics to change dramatically, as opposed to a rebound in favour of seniors. If I were in school two years ago, looking at the career prospects and cost of living, I just straight up wouldn't invest in this career. If that happens at a large enough scale, the replenishment of the discipline may slow, which would raise what people who already have those skills can ask for. And if, in the meantime, wild magical productivity gains don't materialize in a way that reduces the need for expert software people who can reasonably be held liable for whatever gets shipped, then we'll stick around.

Whether it looks easy or not doesn't matter as much, imo. Plumbing looks, and probably is, easy, but it's not the CEO's job to go and fix the pipes.


I think this is the right take. In some narrow but constantly broadening contexts, agents give you a huge productivity edge. But to leverage that you need to be skilled enough to steer, design the initial prompt, understand the impact of what you produce, etc. I don't see agents in their current and medium-term form as a replacement for engineering work; I see them as a great reshuffling of it.

In some business contexts, the impact of additional engineering labor on output is capped: once agent quality reaches a certain point, further improvements yield only minimal gains in output. There, labor is not the bottleneck.

In other business contexts, labor is the bottleneck. For instance, it's the bottleneck for you as an individual: what kind of revenue could you make if you had a large team of highly skilled senior SWEs who operate for pennies on the dollar?

What I think you'll see is labor shifting to where the ROI is highest.

To be fair, I can imagine a world where we eventually fully replace the "driver" of the agent: one good enough to fill the role of a ~staff engineer that can ingest very high-level business context, strategy, and politics, and generate a high-level system design to be executed by one or more agents (or by one or more other SWEs using agents). I don't (at this point) see some fundamental rule of physics or economics that prevents this, but it seems much further ahead of where we are now.



