LLMs are very good at emitting plausible, authoritative-sounding, clearly stated summaries of their training data. Yet if you ask them even fundamental questions about a subject you actually know, they are too often astonishingly, utterly wrong. It's important to remember this (avoid "Gell-Mann amnesia"!) when looking at "AI" search results for things you don't know -- and that's probably most of what you search for, when you think about it. I.e., if you indignantly flung Bill Bryson's book on the English language across the room, maybe you shouldn't take his book on general science too seriously later.
"AI" search results would perhaps be better for all of us if, instead of having perfect spelling and usage, and an overall well-informed tone, they were cast as transcriptions of what some rando at a bar might say if you asked them about something. "Hell, man, I dunno."
A coworker of mine recently ran into this. Had they listened to the AI, they'd have committed tax fraud.
The AI very confidently told them that in a household with two working people, one person could have a family HSA while the other had an individual HSA (you cannot; the IRS makes spouses share a single family contribution limit).
"AI" search results would perhaps be better for all of us if, instead of having perfect spelling and usage, and an overall well-informed tone, they were cast as transcriptions of what some rando at a bar might say if you asked them about something. "Hell, man, I dunno."