> Think of the amount of boilerplate and tests you write, and tedious API documentation lookups you do daily; that all goes away with GPT.
At work we have really worked hard to minimise boilerplate and manually-written/repetitive tests, so I don't write much of that. Getting GPT to write it would certainly be worse: we would still have the deadweight of boilerplate/repetition even if we didn't have to write it, and some of it would be incorrect. Maybe this varies a lot by company — if you're often writing a lot of repetitive code, and for whatever reason you can't fix the deeper issues, then something like GPT/Copilot could be a godsend.
About documentation lookups, I don't know if this varies by language, but I've had very little luck using GPT for this. For the languages I use regularly, I can find anything I need in the documentation very quickly. When I've tried to use GPT to answer the same questions, it occasionally gives completely wrong answers (wasting my time if I believe it), and almost always misses some subtlety that turns out to be important. It just doesn't seem to be very good for this purpose yet.
> At work we have really worked hard to minimise boilerplate and manually-written/repetitive tests, so I don't write much of that.
There's boilerplate in any codebase, even if you make an effort to minimize it. There are always patterns, repeated code structure, CI and build tool configuration, etc.
If nothing else, just being able to say "write a test for this function" and get back tests that cover all code paths, with mocking, fuzzing, etc., would be a huge timesaver, even if you end up fixing the code manually. From what I've seen, this is already possible with current tools; imagine how the accuracy will improve with future generations. Today it's not much different from reviewing code from a coworker, but soon you'll be able to accept the changes after a quick overview, or right away.
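For illustration, here's a sketch of the kind of output that prompt tends to produce today: one test per code path, with the external dependency mocked. The function under test (`fetch_user`) and all names are invented for the example; this is not output from any particular tool.

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical function under test (invented for illustration).
def fetch_user(user_id, session):
    """Return the user's name, or None if the lookup fails."""
    resp = session.get(f"/users/{user_id}")
    if resp.status_code != 200:
        return None
    return resp.json()["name"]

# Roughly the shape of a test an LLM might generate from
# "write a test for this function": one case per branch, mocks for I/O.
class TestFetchUser(unittest.TestCase):
    def test_success_returns_name(self):
        session = MagicMock()
        session.get.return_value = MagicMock(
            status_code=200, json=lambda: {"name": "Ada"}
        )
        self.assertEqual(fetch_user(1, session), "Ada")
        session.get.assert_called_once_with("/users/1")

    def test_non_200_returns_none(self):
        session = MagicMock()
        session.get.return_value = MagicMock(status_code=404)
        self.assertIsNone(fetch_user(1, session))
```

Run with `python -m unittest`. Even when the generated tests are this mechanical, you still have to check that the mocked behaviour matches the real dependency, which is exactly the review step being debated here.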
This may be highly dependent on problem domain or programming language (see the other article about GPT tending to hallucinate whenever it's given problems that don't exist in its training set). My experience has mostly been that the output (including simple stuff like "test this function", though we generally avoid unit tests due to low benefit and high cost) is consistently so flawed that the time to fix it approaches the time it would take to write it myself.