One thread will (probably) be merged into the other, but GPT-2 was an extremely popular OpenAI project that generated long, realistic-sounding text/articles if you gave it a short prompt or topic sentence. GPT-3 is an iteration on that, so it's likely a huge improvement.
Is the reverse also true? If you have the training data necessary for "good" results on GPT-2, is it generally correct to assume that a fine-tuned GPT-2 would provide better results on your task than GPT-3?
This is a massive improvement, given that previously you had to retrain (i.e. fine-tune) the stock model on a specialized dataset to get good results for a particular task.
GPT-2 was a groundbreaking advancement in NLP, and this is an iteration on it: a general-purpose language model that can answer questions, write full articles that are (mostly) indistinguishable from human writing, do some translation, etc...