This is because LLMs do not reason. They pattern-fit. The fact that this solves many of the problems humans typically use reasoning to solve most likely speaks to the training data, or to unrecognized patterns in standardized tests, not to any reasoning capability in LLMs, which does not exist.
To excuse this assumption of reasoning capability, the author's FAQ snarkily points to “research” indicating evidence of reasoning, all of it written by OpenAI and Microsoft employees who would not be allowed to publish anything to the contrary.
It’s a shame people continue to buy into the hype cycle around new tech. Here’s a hint: if the creators of VC-backed tech make extraordinary claims about it, you should assume those claims are heavily exaggerated, if not an outright lie.