
In addition, we shouldn't draw a false equivalence between not knowing how brains work and not knowing how LLMs work, and then conclude that they must be similar.


Also, it's always funny to notice how the brain, throughout history, has been compared to the latest technology available. For a long time people said "the brain must be similar to a clock".


Wouldn’t you say that this is how evolution and the spread of information works, in a system that has the spark of some life chemistry?

Of course… The hole in that theory is that evolution never found the wheel.

The steel man of that theory is that evolution invented the neurological and social processes that then went on to invent the wheel. And the platypus and the clap.

Edit: I forgot to bring it back around and make a point ;)

I’m saying that humans invented clocks and CPUs. We only have metaphors that have emerged from the still misunderstood ether of the informatic universe.


Well, we're observing similarities between them, but people insist they are 100% different in their way of working.


there are currently two things we are aware of in the universe that can reason abstractly. i don't think this is a coincidence.


Only in a very anthropocentric sense. How do we know an ant colony doesn’t reason abstractly (or a human town for that matter)? What about slime mold or amoeba? Both can solve a maze as well as humans. What makes you think a forest ecosystem isn’t capable of abstract thought?

It is only if we narrow thought to mean precisely human-like thought that humans and human creations are uniquely capable of it. To that extent, our models of intelligence are very much in the pre-Copernican era.


> Only in a very anthropocentric sense.

yes, that is the sense in which we are discussing intelligence in order to debate whether the human brain and LLMs operate on similar phenomena


The fact that both humans and LLMs can reason abstractly is an uninteresting fact if we define “abstract reasoning” to be exactly what humans do, and then create models with the goal of recreating exactly that. That is then simply a statement that the model is accurate, and the word intelligence is there only to confuse.

This would be like finding a flower which produces a unique fragrance, then creating a perfume which approaches the same fragrance, and then concluding that since these are the only two things in the universe which can create this fragrance, there must be something special about that perfume.


i would define abstract reasoning as composing and manipulating a model of reality or other complex system in order to make predictions

> is an uninteresting fact if we define “abstract reasoning” to be exactly what humans do, and then create models with the goal of recreating exactly that

if you find this uninteresting, we have perhaps an irreconcilably differing view of things


Your definition excludes language models, as they are in and of themselves just models which interpolate from data (i.e. make predictions). But your definition also includes lots of other systems: most mammalian brains construct some kind of model of reality in order to make predictions. And we have no idea whether other systems (such as fungal networks or ant colonies) do that.

I’m not saying these language models—or my hypothetical perfume—aren’t an amazing feat of technology; however, neither has any deep philosophical implications about shared properties beyond the ones they were constructed to have. Meaning, even if LLMs and humans are the only two things in the universe that can reason abstractly in the way humans do, that doesn’t mean the two share any other properties.


Isn't reasoning abstractly a spectrum though? I took the parent to mean "reason abstractly to the same degree humans do".

Slime molds and amoebas might be able to reason abstractly to some degree, but they can't write code or poetry.


Like I said, only in an anthropocentric sense.

How do you know fungal networks don’t write and read poetry under the forest floor? If they do—and we have no reason to doubt that they do—you wouldn’t be able to read them, let alone understand them.

The earth’s biosphere as a whole also writes code, just in DNA as opposed to on silicon transistors. Why exclude the earth’s biosphere from things capable of abstract thought?


More than 2.

For example, crows can effectively use tools and communicate abstract concepts to one another from memory. That means they can observe a situation, draw conclusions, and use those conclusions to act, as well as decide how to act. That would seem to meet the bar for reasoning abstractly.



