Hacker News

I agree that the most likely way an AI would take control involves social/political engineering, but that doesn't mean it would have human-like morals that keep humanity alive once it no longer needs us, or that it would have human-like limits.

>And the big infinite-dollar question is: could this hypothetical AI improve on itself by transcending human limits? Say, by directly writing programs that it has conscious control over? Can it truly "watch" 1000 video streams in real time?

Even if its mind weren't truly directly scalable, it could spin up 1000 short- or long-lived copies of itself and delegate those tasks to them.


