
Absolutely! Hence my caveat. Other scenarios where I'd be worried about hallucinations are in algorithms that control self-driving vehicles, or in medical-image analysis. I think accurately quantifying uncertainty for these problems (so that an algorithm, rather than just hallucinate, might say "I don't know") is an important and currently active research topic.
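For concreteness, here is a minimal sketch of what "say I don't know" can look like in practice: a classifier that abstains when its predictive uncertainty is too high. This particular version estimates uncertainty from the disagreement of a small ensemble and thresholds the entropy of the averaged prediction; the ensemble, the threshold, and the toy inputs are all illustrative assumptions, not a reference to any specific system (MC dropout or a Bayesian approximation would be common alternatives).

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    # Toy stand-in for an ensemble of 5 models: each member returns
    # noisy logits over 3 classes for an input x. In a real system
    # these would be independently trained networks (or MC-dropout
    # samples from one network).
    def ensemble_logits(x, n_members=5, n_classes=3):
        base = np.array([x, 1.0 - x, 0.5])
        return base + 0.3 * rng.standard_normal((n_members, n_classes))

    # Hypothetical helper: predict the top class, or abstain when the
    # entropy of the mean predictive distribution exceeds a threshold
    # (max entropy for 3 classes is ln 3 ~= 1.10 nats).
    def predict_or_abstain(x, entropy_threshold=0.8):
        probs = softmax(ensemble_logits(x))        # (members, classes)
        mean_p = probs.mean(axis=0)                # predictive distribution
        entropy = -(mean_p * np.log(mean_p)).sum() # total uncertainty (nats)
        if entropy > entropy_threshold:
            return "I don't know", entropy
        return int(mean_p.argmax()), entropy

    for x in [2.0, 0.5]:  # clearly separable input vs. ambiguous one
        print(predict_or_abstain(x))

On the confident input the ensemble members agree and the model commits to a class; on the ambiguous one the averaged distribution is near-uniform, the entropy crosses the threshold, and the model abstains instead of guessing. Getting those uncertainty estimates to be *calibrated* on real self-driving or medical data is the hard, open part of the research.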

