
I'm not worried about self-acting AIs, or anything that would be considered sentient: something acting without a prompt or request. We are far away from that.

What I do worry about is misuse, and the failure to verify output in so many use cases. And maybe too many people developing blind trust in AI, no longer thinking critically or checking its output. That's not such a big deal for media, images, video and so on. But it matters when AI-generated content is actually used in, say, a control system, maybe for self-driving or in a factory. The risk is not that the world will be taken over, but that people will be killed.



