Whoa, don't quit your course because of a product announcement! That'd be overreacting by a lot. Please consider these points instead!
Firstly, it's not true that LLMs can structure and execute large scale changes in entire repositories. If you find one that can do that please let me know, because we're all waiting. If you're thinking of the Devin demo, it turned out on close inspection to be not entirely what it seemed [1]. I've used Claude 3 Opus and GPT-4 with https://aider.chat and as far as I know that's about as good as it gets right now. The potential is obvious but even quite simple refactorings or changes still routinely fox it.
Now, I've done some research into making better coding AIs, and there is a lot of low-hanging fruit. We will probably see big improvements ... some day. But today the big AI labs have their attention elsewhere, and a lot of the promising ideas are only executable by them right now, so I'm not expecting any sudden breakthroughs in core coding capability until they finish their current priorities, which seem to be more broadly applicable than coding (business AI use cases, video, multi-modality, lowering costs, local execution, etc). Either that, or we get to the point where open source models of GPT-4+ quality can be run quite cheaply.
Secondly, do not underestimate the demand for software. For as long as I've been alive, the demand for software has radically outstripped supply. GitHub claims there are now more than 100 million developers in the world. I don't know if that's true, because it surely counts a lot of people who aren't really professional developers, but even so it's a lot of people. And yet every project has an endless backlog, and every piece of software is full of horrible hacks that exist only to kludge around the high cost of development. Even if someone does manage to make LLMs that can independently tackle big changes to a repository, they're going to require a very clear and precise set of instructions, which means the effect will probably be additive rather than substitutive: the main thing they'd be applied to is burning down the giant backlog of tickets nobody wants to do themselves and nobody will ever get to, because they're just not quite important enough to put skilled devs on. Example: any codebase that's in maintenance mode but still needs dependency updates.
But then start to imagine all the software we'd really like to have yet nobody can afford to write. An obvious one here is fast, native UI. Go look at the story that was on HN a day or two ago about why every app seems so inefficient these days. The consensus reason is that nobody can afford to spend money optimizing anything, so we get an endless stream of Electron apps that abuse React and consume half a gig of RAM to do things that Word 95 could do in 10MB. Well, porting a web app to native UI for Mac or Windows or Linux seems like exactly the kind of thing LLMs will be good at. Mechanical cross-platform abstraction layers (write once, render anywhere) never worked well for this, but if you can just blast your way through porting and re-porting the code without those abstractions, maybe you can get acceptably good results. I already experimented with porting JavaFX FXML files to Compose Multiplatform, and GPT-4 could do a decent job on simple files. That was over a year ago, before multimodal models could even see the UI they were generating.
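For a flavour of what that porting looks like, here's a hand-written sketch of the kind of translation involved, from a tiny FXML fragment to its Compose Multiplatform equivalent. The Compose widget names (`Column`, `TextField`, `Button`, etc) are real APIs, but the login form itself and the `LoginForm` composable are invented for illustration:

```kotlin
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.material.TextField
import androidx.compose.runtime.*
import androidx.compose.ui.unit.dp

// The (hypothetical) FXML source being ported:
//
// <VBox spacing="8">
//   <Label text="Username"/>
//   <TextField fx:id="usernameField"/>
//   <Button text="Log in" onAction="#onLogin"/>
// </VBox>

// The Compose Multiplatform translation. Note the structural mapping is
// mostly mechanical (VBox -> Column, Label -> Text), but state and event
// wiring (fx:id fields, onAction handlers) have to be rethought in
// Compose's declarative style, which is where an LLM's judgment helps.
@Composable
fun LoginForm(onLogin: (String) -> Unit) {
    var username by remember { mutableStateOf("") }
    Column(verticalArrangement = Arrangement.spacedBy(8.dp)) {
        Text("Username")
        TextField(value = username, onValueChange = { username = it })
        Button(onClick = { onLogin(username) }) { Text("Log in") }
    }
}
```

The container-and-control mapping is rote; it's the controller logic behind `fx:id` and `onAction` that makes such ports expensive for humans and plausible work for LLMs.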
There are cases where better tech does wipe out or fundamentally change jobs, but it's not always so. Programmer productivity has improved enormously over time without reducing employment. Often what we see when supply increases is that demand just goes up a lot; that's the Jevons paradox. In future, even if we optimistically assume all the problems with coding LLMs get fixed, I think there will still be a lot of demand for programmers, but the nature of the job may change somewhat, with more emphasis on understanding new tech, imagining what's possible, working out what the product should do, and covering for the AI when it can't do what's needed. And sometimes just doing it yourself is going to be faster than trying to explain what you want and checking the results, especially in exploratory work.
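The Jevons point can be made concrete with a toy constant-elasticity model. The numbers and the model are illustrative assumptions, not data: the claim is just that if LLMs halve the cost of a unit of software and demand is elastic enough (price elasticity above 1), total spending on software goes up rather than down:

```kotlin
import kotlin.math.pow

// Toy constant-elasticity demand model (an assumption for illustration):
// if the unit cost of software changes by costFactor, quantity demanded
// changes by costFactor^(-elasticity), and total spend changes by
// costFactor^(1 - elasticity).

fun demandMultiplier(costFactor: Double, elasticity: Double): Double =
    costFactor.pow(-elasticity)

fun spendMultiplier(costFactor: Double, elasticity: Double): Double =
    costFactor * demandMultiplier(costFactor, elasticity)

fun main() {
    // Halve the cost (costFactor = 0.5) with elasticity 1.5:
    // demand roughly triples (~2.83x) and total spend rises (~1.41x).
    println(demandMultiplier(0.5, 1.5))
    println(spendMultiplier(0.5, 1.5))
    // With inelastic demand (elasticity 0.5) spend would instead fall,
    // so the Jevons outcome hinges on demand being elastic.
    println(spendMultiplier(0.5, 0.5))
}
```

The history of the industry (compilers, garbage collection, open source, the cloud) suggests demand for software has behaved elastically so far, which is the optimistic reading of cheaper code generation.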
> Secondly, do not underestimate the demand for software. For as long as I've been alive, the demand for software has radically outstripped supply. GitHub claims there are now more than 100 million developers in the world.
And yet jobs are more difficult to come by than any time in recent history (regardless of skill or experience; excepting perhaps "muh AI" related roles), a seemingly universally expressed sentiment around these parts.
So, chin up!
[1] https://news.ycombinator.com/item?id=40010488