If the AI is correct 90% of the time, you can be reasonably sure it will be correct next time. That's a rational expectation. If you are in a high-stakes situation, then even a 1% false-positive rate is too high and you should definitely verify the output. Again, I don't see the danger.
Ultimately I think the danger is that the AI sounds like it knows what it’s talking about. It’s very authoritative. Anyone who presents content at that level of detail with that level of confidence will be convincing.
You can hear doubt when a presenter isn’t certain of an answer. You can see the body language. None of that is present with an AI.
And most people don’t know/care enough to do their own research (or won’t know where to find a more reliable source, or won’t have the background to evaluate the source).
> You can hear doubt when a presenter isn’t certain of an answer. You can see the body language. None of that is present with an AI.
This is not how people consume information nowadays anyway. People just watch YouTube videos where presenters don't face this kind of pressure. Or they read posts on social media from someone they like.
Anyway, we can't rely on these social cues anymore. And even if we could, they are not ideal, because they allow bullshitters to thrive, whereas modestly confident people end up ostracized.
I've been thinking more about that over the last hour or so, and I've come to the conclusion that different people have different priorities, and I don't think there's much we can do about that.
Whether it's nature, nurture, or experience, I strongly distrust people who claim to have THE answer to any complex problem, or who feel that it's better to bulldoze other people than to be wrong.
I'll listen to truth seekers, but ignore truth havers.
However, clearly that's not a universal opinion. Many people are happier believing in an authoritarian who has all the answers. And I don't think that will ever change.