As I said, sometimes, especially if you ask a simple question whose answer is an easily verifiable fact on any search engine. Claude gave me nonsense links all summer after some update, and nothing says ChatGPT won't do the same after some future "improvement". Besides, the more you veer towards questions that are not so clear-cut ("I want to make an LLM application that mimics sounds Brazilian animals make, running on an open source model; how many parameters does it need, what model should I use, and should I use React or Svelte for the frontend?"), the fuzzier the results. And the longer the chat goes on, the tighter its context window becomes and the more it hallucinates.
Point being: no, you cannot trust it without double-checking its information elsewhere. Same as with anything else.
The whole point of a cited source is that you read the source to verify the claim. Amazing how many people in this thread seem to not let this little detail get in the way of their AI hate.
> The whole point of a cited source is that you read the source to verify the claim. Amazing how many people in this thread seem to not let this little detail get in the way of their AI hate.
I like that you read all the citations in your concrete example of how good ChatGPT is at citations and chose not to mention that one of them was made up.
Like, either you saw it and consciously chose not to disclose that information, or you asked a bot a question, got a response that seemed right, and trusted that the sources were correct before posting them. But there's no chance of the latter, because you specifically just stated that that's not how you use language models.
On an unrelated note, what are your thoughts on people using plausible-sounding LLM-generated garbage text backed by fake citations to lend credibility to their existing opinions, as an existential threat to the concept of truth or authoritativeness on the internet?
I use LLMs all the time and have since they first became available, so I don't hate them. But I do know they are just tools with limitations. I am happy that ChatGPT has better citations these days, but I still do not trust it with anything important without double-checking in several places. Besides, the citation itself can be some AI-generated blog post with completely wrong information.
These tools have limitations. The sooner we accept that, the sooner we learn to use them better.
Says "Page Not Found". From a technical standpoint, how do you think that happened? Personally, I think it is either the result of a hallucination, or the chatbot actually did a web search, found a valid page, and then modified the URL in a way that broke it before sending it to you.
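For what it's worth, you can distinguish the two cases mechanically: a hallucinated URL and a mangled real URL both usually parse fine, so only actually fetching the link tells you anything. Here's a minimal sketch in Python (stdlib only; the function names and the `"citation-check"` User-Agent string are my own inventions, not any real tool's API):

```python
from urllib.parse import urlparse

def looks_wellformed(url: str) -> bool:
    # Structural sanity check only. A hallucinated URL typically still
    # parses as a valid http(s) URL, so passing this proves nothing
    # about whether the page exists.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def check_citation(url: str, timeout: float = 5.0) -> str:
    """Fetch the URL and classify the result. Requires network access."""
    import urllib.request
    import urllib.error
    if not looks_wellformed(url):
        return "malformed"
    try:
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "citation-check"},  # hypothetical UA string
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return "ok"
    except urllib.error.HTTPError as e:
        # A 404 here is exactly the "Page Not Found" case above: the URL
        # is well-formed and the host exists, but the page does not.
        return f"http {e.code}"
    except (urllib.error.URLError, ValueError):
        return "unreachable"
```

If a cited link comes back `http 404` on a real domain, that's consistent with either a fabricated path or a real URL that got altered; a made-up domain tends to show as `unreachable` instead. Either way, the citation failed verification.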