> Show me any evidence for AI risk today beyond people's theories and beliefs?
Deduction. Empirical evidence isn't the only source of insight. You don't have to conduct experiments in order to reasonably conclude that an entity that
1. outperforms humans at mental tasks
2. shares no evolutionary commonality with humans
3. does not necessarily have any goals that align with those of humans
is a potential threat to humans. This follows from very basic deductive analysis.
> There are no AGI experts or AI risk experts, because we don't have any of these systems to study and analyze.
Indeed. Which increases the risk. Unless you are claiming that AGI is actually impossible, the fact that its properties and behavior cannot be studied should make people even more worried.
Uncertainty and lack of knowledge are precisely what risk is made of. How little we know about potential AGI is exactly why AGI represents such a big risk. If we completely understood it and were able to make reliable predictions, there would be zero risk by definition.
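To put that claim in standard decision-theoretic terms (my framing, not the original commenter's): if risk is expected loss over a distribution of possible outcomes, then collapsing the distribution to a known, verifiable outcome drives the risk to zero. A minimal sketch in Python, with purely illustrative numbers:

```python
# A minimal sketch of the decision-theoretic point above: risk modeled
# as expected loss over a distribution of outcomes. The numbers are
# illustrative assumptions, not measurements.

def expected_loss(outcomes):
    """outcomes: list of (probability, loss) pairs whose probabilities sum to 1."""
    return sum(p * loss for p, loss in outcomes)

# Wide uncertainty: we cannot rule out the catastrophic outcome.
uncertain = [(0.90, 0.0), (0.09, 10.0), (0.01, 1000.0)]

# Perfect knowledge: the distribution collapses to an outcome we can
# verify (and steer toward), so no probability mass sits on catastrophe.
certain = [(1.0, 0.0)]

print(expected_loss(uncertain))  # 10.9 -- nonzero risk, driven by uncertainty
print(expected_loss(certain))    # 0.0  -- reliable prediction, zero risk
```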
1) Computers, smartphones, and pocket calculators also outperform humans at mental tasks. So do birds, dolphins, and dogs for that matter, at tasks for which they are specialized.
2) So? What are you imagining this implies? An infinity of possibilities does not a reason make, unless you are talking about arbitrary religious beliefs.
3) Right: no goals, no will, no purpose. Just some matrix multiplies doing interesting things.
Deduction requires premises that lead to further premises or to a conclusion via accepted facts or rules of inference. I'm genuinely curious why you think any of these properties automatically implies danger.
The future is uncertain. The stock market, the economy, your health, your friendships and romances are all unpredictable and uncertain. Uncertainty is not a reason to freak out, although it might encourage us to find ways to become adaptable, antifragile, and wise. I think AI will help us improve in these dimensions, because it is already proving that it can with real evidence, not beliefs.
The world is already incredibly full of risks to humans. People in the AI doomer camp are claiming that AI is potentially a new kind of uncontrollable risk that warrants extraordinary regulation.
The basis of this claim seems to be a confusion of logical or deductive reasoning with inductive or observational reasoning.
The argument comes down to:
- It's possible to imagine a superintelligent machine that has properties that will kill everyone (this is an exercise in logical reasoning).
- Since it's possible to imagine it, this means it will come into existence. This is an error, because things that come to exist in the real, physical world do so through physical processes, and our knowledge of those processes is governed by inductive reasoning.
Generally, there is a long series of steps between imagining some constructed, complex machine and realizing it, along with its conceptual foundations. It requires sustained effort, trial and error, maintenance, and generally a serious fight against entropy to make it function and keep it functioning.
The sort of out-of-control AI imagined by AI doomers is not something we've seen before.
So we shouldn't make costly decisions based on this confusion of reasoning.
> since it’s possible to imagine it, this means it will come into existence
Nope. That's not the argument. In fact, it's such a bad take that it reeks of a deliberately constructed strawman.
The actual argument is: Since it's possible to imagine it, and doesn't contradict any known laws of nature or technology, and current development appears to be iterating towards it, it might come into existence, thus it presents a statistical risk.
When I take out tornado insurance, it's not because I know my house will be blown away by a storm; it's because I don't know, but the possibility is there.
Certainty is not required in order to conclude that risk exists. Quite the opposite is true: Risk is a function of uncertainty.
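The insurance analogy can be made quantitative. As a back-of-the-envelope illustration (all figures are made up for the example, not actuarial data), the decision weighs the probability of the event against the magnitude of the loss:

```python
# Back-of-the-envelope version of the tornado insurance analogy.
# All figures are assumptions for illustration, not actuarial data.

p_tornado = 0.002      # assumed annual probability the house is destroyed
house_value = 400_000  # assumed replacement cost
premium = 900          # assumed annual premium

expected_uninsured_loss = p_tornado * house_value
print(expected_uninsured_loss)  # 800.0

# The premium exceeds the expected loss (insurers profit on average),
# yet buying is still rational for a risk-averse owner: a small certain
# cost replaces a small chance of a catastrophic one. Certainty about
# the outcome is not required for the risk to be worth paying against.
```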
The word “entity” is doing some quiet but heavy lifting here. I think it would be a good idea to specify what you really mean by this term, and how we can logically deduce the development of such a thing from existing technology (deep learning).