And you will have no idea whether the solution it presents to you is idiomatic, recommended, contains some common critical flaw, or is hopelessly outdated. How can you find out? Back to alt-tabbing to the browser.
Sure, it may take a bit more time to get going, but then you'll get it right the first time and learn something along the way. Your Copilot example is just another iteration of copy-and-pasting some random snippet from StackOverflow in the hope that it will work, but without seeing its context: when the post was written, and what comments, good or bad, it received.
I'd actually be pretty afraid of a codebase that is created like that.
> You have no idea if the alternative code you would have written would have been idiomatic or had some critical flaw.
But I have a feeling for both, which is one of the key components of the skill in our trade.
For idiomatic code, I know the degree to which I'm following how things "should" be done or are "usually" done in a given language. If I'm uncertain, I know that. GPT won't tell me this. Worse, it will confidently claim things, possibly even when presented with evidence to the contrary.
For critical flaws, I know the "dark corners" of the code: cases which were not obvious to handle or required some trick. I'll test those specifically. With GPT's code, I have no idea what the critical cases are. I can read the code and try to guess, but it's like outsourcing test-writing to a QA department: never going to be as effective as the original author of the code. And if I can't trust GPT to write correct code, I can't trust it to write a good test for that code. So neither the original author of the code (GPT) nor somebody external (me) will be able to test the result properly.
I mean... I certainly know which languages I can write idiomatic code in and which I cannot.
I can't know that my code will be free of critical flaws, but I do understand the common sources of flaws and techniques to avoid them, and I'm quite confident I can build small features like this that simply aren't vulnerable to SQL injection, on the first try and without requiring fuzzers or code review: https://infosec.exchange/@malwaretech/110029899620668108
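(The technique in question is nothing exotic. A minimal sketch in Python's sqlite3 of the parameterized-query pattern that makes injection a non-issue; the table and the hostile input are made up for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

    # Hostile input that would break naive string interpolation:
    user_input = "alice' OR '1'='1"

    # Unsafe pattern (never do this): the input becomes part of the SQL text.
    # query = f"SELECT id FROM users WHERE name = '{user_input}'"

    # Safe pattern: a parameterized query. The driver passes the value
    # separately from the SQL, so it can never be interpreted as SQL.
    rows = conn.execute(
        "SELECT id FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the hostile string matched no user

The point is that knowing to reach for the placeholder form in the first place is exactly the kind of judgement the model may or may not apply.)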
I'm confident enough in most languages I write in to recognize correct code. But I am not usually so familiar that I can conjure the exact syntax for many specialized things I need. Copilot is just a much quicker way to get what I need without looking it up.
You don’t have to accept the suggestions as-is. It’s just code, you can edit it as much as you like.
Getting a good idiomatic starting point is a great boost.
Watch the demos where they provide GPT-4 with an API for performing search queries and calculations. These tool integrations are the next step, and they will include specialized ones for using language and library docs. They could also be given access to your favourite books on code style, or to a linter they could use to clean up and format the code before presenting it. The model is capable of using these tools itself when it is set up with the right initial prompt. Even now, Copilot is pretty good at copying your code style if there is enough code in the repo to start with.
It is. I can see that it was written in 2003 and discard it. GPT won't tell me if its answer is based on an ancient lib version.
Essentially, GPT is that rando's webpage, but with the metadata stripped away that allowed me to make judgement calls about its trustworthiness. No author, no date, no style to tell if somebody is trolling.