> But ChatGPT isn’t a step along the path to an artificial general intelligence that understands all human knowledge and texts; it’s merely an instrument for playing with all that knowledge and all those texts.
No arguments provided in support of this claim.
Clearly ChatGPT has serious limitations, and it's "pretty dumb" in many respects. At the same time it's a miracle that ChatGPT works as well as it does. It translates, it summarizes, it writes poems, it explains science concepts, it solves puzzles. And it's a general purpose thing that does it all. It's not AGI, obviously, but is it a step towards AGI? Maybe!
People underestimate how impactful incremental improvements to this tech will be in the next couple of years. It will get way, way, better even in the absence of conceptual breakthroughs.
The evidence comes from a knowledge of how LLMs work. By definition, they have no understanding of any of the content they scan or present. They simply output the statistically most likely next word.
The fact that these compositions look real comes from their statistical similarity to existing writing on the prompted topics. That's not intelligent, and it cannot evolve or progress to having understanding or intelligence.
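The "next most likely word" idea can be illustrated with a toy bigram model: count which word follows which in a corpus, then always emit the most frequent follower. This is only a sketch of the statistical idea; real LLMs use neural networks over tokens and context windows, not raw bigram counts, and the tiny corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on vastly more text.
corpus = "the cat sat on the mat and the cat ran".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Greedily return the most common continuation seen after `word`.
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly appending the most likely next word.
text = ["the"]
for _ in range(3):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

The generated text "looks real" only because each step mimics the statistics of the source text, which is exactly the property being debated above.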
LLMs aren't intelligent, but if they can further *human* understanding of what intelligence is, then that is sufficient. Take the Turing test, for instance. Today, thanks to LLMs, we know that a machine can fool an operator into thinking it's intelligent over text without actually understanding anything. 10 years ago we would never have believed that.
OpenAI is building stuff like Dall-E and GPT-3 to further their understanding of AI, in order to get closer to AGI. It's the entire point of the thing.
No, they cannot further human understanding. They can only read and write existing text faster than you or I can.
Something that acts like a parrot and demonstrates no intelligence cannot be used to learn about intelligence.
The business model for ChatGPT should be obvious. It's stated at the bottom of every page. They will sell a conversational chat agent for use in customer service and tech support. That's useful and real and meets the Turing test criteria you're talking about.
I suspect the author used ChatGPT to generate some of the arguments against it. As always, there will be downplayers and critics, as they really need to justify their existence. But they are all talk, no action.
TL;DR: ChatGPT is dumber than the author thinks, because the author does not understand what ChatGPT is and how it works. The author also assumes the reader suffers from the same lack of information.