> However, this part of AI evolves very quickly. So given these are known problems, why shouldn't we expect rapid improvements in agentic AI systems for software development, to the point where software developers who stick with the old paradigm will indeed be eroded in time?
Because writing code has always been the easy part. A senior isn't someone who's better at writing code than a junior - they might well be worse at writing code. AI can now do the easy part, sure. What grounds does that present for believing that it's soon going to be able to do the hard part?
I don’t know what level of experience you have with agentic AI, but the frontier models are also really good at things like product management and data modeling. You can start with a description of the problem and end up with a really solid design plan that you can then give the AI to implement.
So yeah, if you’re starting with a “write me code that does X, Y, Z” prompt, then you aren’t getting the most out of these tools, because you’re right, that’s not the hard part.