
Asteroids, nuclear war, climate change, and AI. As the old song says, "One of these things is not like the other." We know that asteroids exist and that many large ones are uncomfortably close to our home. We know that nuclear weapons exist and have seen them used. We know that the climate is getting hotter (*).

AI is...well, people losing their shit over ChatGPT (**) aside, AI is not going to be real enough to worry about for a few more decades at least.

(*) Anyone who's about to regurgitate some fossil-fuel industry talking points in response, just save your breath.

(**) Above note goes double for this.



Would you rather we wait until AIs are actually posing a threat before we study ways to align them with human values? Tons of money already goes into fighting climate change, and basically everyone on earth is aware of the threat it poses. The AI safety field is only about a decade old and is relatively unknown. Of course it makes sense to raise awareness there.


> Would you rather we wait until AIs are actually posing a threat before we study ways to align them with human values?

No, I would rather we wait until we are close enough to AI that we are talking about something concrete, rather than making wild speculations.


Interesting. In your opinion, how should we decide when we're close enough that we're talking about something concrete?


Please see second asterisked point above.


You seem to agree that at some point in the next few decades AI will be something we need to worry about, so I'm trying to figure out exactly what it is you oppose.

Would you have opposed research into renewable energy in the 1970s since global warming was still a few decades away from being something we needed to worry about?


> You seem to agree that at some point in the next few decades AI will be something we need to worry about

If you think this, then you have misunderstood me. Now go elsewhere and bother other people.


I'm not trying to bother you. I'm genuinely curious about what makes people oppose AI safety research.

What makes you sure it's not going to be a problem?


> I'm not trying to bother you.

Your intent is irrelevant.


Why are you bothered by a simple line of follow-up questions about a belief you expressed publicly?



