
> AGI is inevitable because computation is universal and intelligence is substrate independent.

What the article is asking, albeit obliquely, is: how do we know that to be true?

It is very difficult to prove when definitions for intelligence and consciousness are fuzzy and not widely agreed upon.



I see people assert this all the time. Intelligence is the ability to achieve goals. Consciousness is the contents of your awareness, what you are aware of.

The definitions are irrelevant. People want a definition to do demarcation. But demarcation is boring. I don't care whether some threshold entity is on this or that side of the line. The existence of some distinct territories is enough for me.

There is no reason to imagine AIs are prohibited from occupying territory on both sides of the line.

AIs aren't climbing up a ladder; non-local exploration is possible.


> Intelligence is the ability to achieve goals

A more precise definition from François Chollet: "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty."

https://arxiv.org/pdf/1911.01547.pdf

So a system is more intelligent if it can reach skill on harder tasks with fewer trials and less prior knowledge. Intelligence is always defined over a scope of tasks; human intelligence, for example, only applies to the space of tasks and domains that fit within human experience. Our intelligence has broad generalization (adaptation to unknown unknowns across a broad category of related tasks) but does not have extreme generalization.
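
Very roughly, the "efficiency" framing can be sketched like this. This is only a toy illustration of the intuition, not Chollet's actual measure (the paper weights tasks by generalization difficulty and averages over curricula); the field names and arithmetic below are my own assumptions:

    # Toy sketch of "skill-acquisition efficiency", not the paper's formula.
    def skill_acquisition_efficiency(tasks):
        # tasks: list of dicts with hypothetical fields
        #   skill      - performance reached on the task (0..1)
        #   difficulty - generalization difficulty of the task
        #   priors     - amount of built-in knowledge used
        #   experience - amount of training data / trials consumed
        scores = []
        for t in tasks:
            cost = t["priors"] + t["experience"]
            scores.append(t["difficulty"] * t["skill"] / cost if cost else 0.0)
        # Only defined relative to this scope of tasks.
        return sum(scores) / len(scores)

    # Two hypothetical systems on the same scope:
    memorizer = [{"skill": 0.9, "difficulty": 1.0, "priors": 5.0, "experience": 100.0}]
    few_shot  = [{"skill": 0.8, "difficulty": 1.0, "priors": 5.0, "experience": 3.0}]
    print(skill_acquisition_efficiency(memorizer))  # low: needed lots of experience
    print(skill_acquisition_efficiency(few_shot))   # higher: similar skill, far less data

The point of the comparison: a system that needs orders of magnitude more data or built-in priors to reach the same skill counts as less intelligent under this framing, even if its final performance is the same.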


I read it more as asking about deep-NN models specifically.

If it really were trying to suggest that computation isn't universal or that our intelligence is non-physical or something, that would be a whole different problem.



