But then so are we, aren't we? We are just predicting the next word we are saying. Even when you add thought behind it (sure, some people think differently: without an inner monologue, or in colors and sounds and shapes, etc.), that "reasoning" still goes into the act of coming up with the next word we speak or write.
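
To be concrete about what "predicting the next word" means mechanically, here is a toy sketch (not any real model's code): an autoregressive loop that samples each next word from a conditional distribution. The bigram table and word list are made up purely for illustration; real LLMs condition on the whole context, but the loop is the same shape.

    import random

    # Toy stand-in for a language model: P(next word | previous word).
    bigram_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.2, "ran": 0.8},
        "idea": {"spread": 1.0},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def sample_next(word):
        # Draw the next word from the conditional distribution.
        dist = bigram_probs.get(word, {"<eos>": 1.0})
        words = list(dist.keys())
        weights = list(dist.values())
        return random.choices(words, weights=weights, k=1)[0]

    def generate(start, max_len=6):
        # Autoregressive generation: each step feeds the latest word back in.
        out = [start]
        for _ in range(max_len):
            nxt = sample_next(out[-1])
            if nxt == "<eos>":
                break
            out.append(nxt)
        return " ".join(out)

    print(generate("the"))  # e.g. "the dog ran away"

The question being debated below is whether what our brains do when we talk is a fancier version of that loop or something categorically different.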


This type of response always irks me.

It shows that we computer scientists think of ourselves as experts on everything, even though biological machines are well outside our expertise.

We should stop repeating things we don't understand.


We're not predicting the next word we're most likely to say; we're actively choosing the word that we believe most successfully conveys what we want to communicate. This relies on a theory of mind of those around us and an intentionality of speech that aren't even remotely the same as "guessing what we would say if only we said it".


When you talk at full speed, are you really picking the next word?

I feel that we pick the next thought to convey. I don't feel like we actively think about the words we're going to use to get there.

Though we are capable of doing that when we stop to slowly explain an idea.

I feel that LLMs are the thought-to-text part without the free-flowing thought.

As in, an LLM won't just start talking; it doesn't have that always-on conscious element.

But this is all philosophical, me trying to explain my own existence.

I've always marveled at how the brain picks the next word without me actively thinking about each word.

It just appears.

For example, there are times when a word I never use, and couldn't even give you the explicit definition of, pops into my head, and it is the right word for that sentence, even though I have no active understanding of that word. It's exactly as if my brain knows that the thought I'm trying to convey requires this word, based on some probability analysis.

It's why I feel we learn so much from reading.

We are learning the words that we will later re-utter and how they relate to each other.

I also agree with most who feel there's still something missing for LLMs, like the character from The Wizard of Oz who keeps talking while saying if he only had a brain...

There is some of that going on with LLMs.

But it feels like a major piece of what makes our minds work.

Or, at least what makes communication from mind-to-mind work.

It's like computers can now share thoughts with humans while still lacking some form of thought themselves.

But the set of puzzle pieces missing from full-blown human intelligence seems to be a lot smaller today.


So we are really only what we understand ourselves to be? We must have a pretty great understanding of that thing we can't explain, then.


I wouldn't trust a next-word guesser to make a claim like the one you're attempting; ergo we aren't, and the moment we think we are, we aren't.


Humans and LLMs are built differently; it seems disingenuous to think we both use the same methods to arrive at the same general conclusion. I can inherently understand some proofs of the Pythagorean theorem, but an LLM might apply different ones for various reasons. Yet the output/result is still the same. If a next-token generator run in parallel can generate a performant relational database, that doesn't directly imply I am also a next-token generator.


Humans do far more than generate tokens.



