
There's certainly a way to do this, poorly. But it's not realistic to expect an AI to diagnose users with mental illnesses on the fly without screwing it up repeatedly, with false positives, false negatives, and plenty of other, more bizarre failure modes that don't neatly fit into either category.

I just don't think it's a good idea to legally mandate that companies implement features we simply don't have the technology to implement well.
