What nonsense. I've spent over a decade 100% focused on AI, and the broad consensus among everyone I've worked with is not to be that concerned at all. The only consensus is that a small group of self-proclaimed experts make a lot of noise because they get lots of press coverage when they scream and shout, making predictions based on zero scientific evidence.

We can understand the physics of greenhouse gases and take measurements of earth systems to build evidence for models and theories. (Many of which are nonetheless very inaccurate beyond short time horizons.) Show me any evidence for AI risk today beyond people's theories and beliefs?

The best predictor of the future is the past, not people's wild ideas about what the future could be. I'm not about to sit here feeling scared just because there is some uncertainty about whether our matrix multiplies are about to go rogue. There are no AGI experts or AI risk experts, because we don't have any of these systems to study and analyze. What we have is people forming beliefs about their own predictions about systems which are unknowable.



> Show me any evidence for AI risk today beyond people's theories and beliefs?

Deduction. Empirical evidence isn't the only source of insight. You don't have to conduct experiments in order to reasonably conclude that an entity that

1. outperforms humans at mental tasks

2. shares no evolutionary commonality with humans

3. does not necessarily have any goals that align with those of humans

is a potential threat to humans. This follows from very basic deductive analysis.

> There are no AGI experts or AI risk experts, because we don't have any of these systems to study and analyze.

Indeed. Which increases the risk. Unless you are claiming that AGI is actually impossible, the fact that its properties and behavior cannot be studied should make people even more worried.

Uncertainty and lack of knowledge are what constitute risk. How little we know about potential AGI is exactly why AGI represents such a big risk. If we completely understood it and were able to make reliable predictions, there would be zero risk by definition.


1) Computers, smart phones, and pocket calculators also outperform humans at mental tasks. So do birds, dolphins, and dogs for that matter, at tasks for which they are specialized.

2) so? What are you imagining this implies? An infinity of possibilities does not a reason make, unless you are talking about arbitrary religious beliefs.

3) Right, no goals, no will, no purpose. Just some matrix multiplies doing interesting things.

Deduction requires a premise which then leads to another premise or a conclusion, based on accepted facts or reasons. I'm genuinely curious why you think any of these properties automatically implies danger.

The future is uncertain. The stock market, the economy, your health, your friendships and romances, are all unpredictable and uncertain. Uncertainty is not a reason to freak out, although it might encourage us to find ways to become adaptable, anti-fragile, and wise. I think AI will help us improve in these dimensions because it is already proving that it can with real evidence, not beliefs.


It seems a safe prediction, if you extrapolate current progress, that AI will get generally smarter than humans.

It also seems a safe prediction, given past human behaviour, that some humans will set some AI to do bad stuff.

Therefore risk.

(eg "chat gtp 27, help me make billions on crypto and use it to set up a distributed army to take over the world")


> ChatGPT 27, help me make billions on crypto and use it to set up a distributed army to take over the world

Yevgeny Prigozhin has entered the chat.


the world is incredibly filled with risk to humans—people in the AI doomer camp are making the claim that AI is potentially a new kind of uncontrollable risk that warrants extraordinary regulation

the basis of this claim seems to be a confusion of logical or deductive reasoning with inductive or observational reasoning

the argument comes down to

- it’s possible to imagine a super intelligent machine that has properties that will kill everyone (this is an exercise in logical reasoning)

- since it’s possible to imagine it, this means it will come into existence — this is an error because things that exist in the real, physical world do so based on physical processes governed by inductive reasoning

generally, there is a long series of steps between the imagining of some constructed, complex machine and its realization, along with its conceptual foundations; it requires sustained effort, trial and error, maintenance, and generally a serious fight against entropy to make it function and keep it functioning

the sort of out of control AI imagined by AI doomers is not something we’ve seen before

so we shouldn’t make costly decisions based upon this confusion of reasoning


> since it’s possible to imagine it, this means it will come into existence

Nope. That's not the argument. In fact, it's such a bad take that it reeks of a deliberately constructed strawman.

The actual argument is: Since it's possible to imagine it, it doesn't contradict any known laws of nature or technology, and current development appears to be iterating towards it, it might come into existence, and thus it presents a statistical risk.

When I take out tornado insurance, it's not because I know my house will be blown away by a storm – it's because I don't know, but the possibility is there.

Certainty is not required in order to conclude that risk exists. Quite the opposite is true: Risk is a function of uncertainty.
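
To put rough numbers on the tornado analogy, here is a toy expected-loss sketch; the probability, house value, and premium are made-up assumptions purely for illustration:

    # Toy expected-loss comparison for the tornado-insurance analogy.
    # All numbers are assumptions for illustration only.
    p_tornado = 0.01        # assumed 1% chance of a destructive tornado this year
    house_value = 300_000   # assumed cost of rebuilding the house
    premium = 500           # assumed annual insurance premium

    expected_loss_uninsured = p_tornado * house_value  # 0.01 * 300000 = 3000.0
    expected_loss_insured = premium                     # insurer absorbs the loss (deductible ignored)

    print(expected_loss_uninsured, expected_loss_insured)  # 3000.0 vs 500

The exact numbers don't matter; the point is that a small or uncertain probability multiplied by a large loss can still dominate the decision.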


The word “entity” is doing some quiet but heavy lifting here. I think it would be a good idea to specify what you really mean by this term, and how we can logically deduce the development of such a thing from existing technology (deep learning).


> I've spent over a decade 100% focused on AI, and the broad consensus among everyone I've worked with is not to be that concerned at all.

I also work in AI, and I don't mention my concerns to colleagues who are as dismissive of AI risk as you are. Perhaps your ideas about your colleagues' views are distorted.


IMO it is really just paranoid AI-risk-cult theater for narcissists.

The more narcissistic types have figured out that this is their moment in the sun to see their names in the paper, and the more they play up the idea that AI is going to eat us, the more attention they will get from the media.

The whole idea is so irrational that I fail to see what other explanation there really is.

The other guilty party is the masses, who have been trained to think in terms of appeal to authority instead of using their own brains. They have created the audience for this theater.


I don't think it's wise to just give this one the climate change treatment, that is, not listening to the scientists and not taking action or taking it seriously until it's a catastrophe.



