
"Plausible-looking but incorrect sentences" is cheap, reflexive cynicism. LLMs are an incredible breakthrough by any reasonable standard. The reason to be optimistic about further progress is that we've seen a massive improvement in capabilities over the past few years and that seems highly likely to continue for the next few (at least). It's not going to scale forever, but it seems pretty clear that when the dust settles we'll have LLMs significantly more powerful than the current cutting edge -- which is already useful.

Is it going to scale to "superintelligence?" Is it going to be "the last invention?" I doubt it, but it's going to be a big deal. At the very least, comparable to google search, which changed how people interact with computers/the internet.



>when the dust settles we'll have LLMs significantly more powerful than the current cutting edge -- which is already useful.

LLMs, irrespective of how powerful, are all subject to the fundamental limitation that they don't know anything. The stochastic parrot analogy remains applicable and will never be solved because of the underlying principles inherent to LLMs.

LLMs are not the pathway to AGI.


I sometimes wonder if we’re just very advanced stochastic parrots.

Repeatedly, we’ve thought that humans and animals were different in kind, only to find that we’re actually just different in degree: elephants mourn their dead, dolphins have sex for pleasure, crows make tools (even tools out of multiple non-useful parts! [1]). That could be true here.

LLMs are impressive. Nobody knows whether they will or won’t lead to AGI (if we could even agree on a definition – there’s a lot of No True Scotsman in that conversation). My uneducated guess is that you’re probably right: just continuing to scale LLMs without other advancements won’t get us there.

But I wish we were all more humble about this. There’s been a lot of interesting emergent behavior with these systems, and we just don’t know what will happen.

[1]: https://www.ox.ac.uk/news/2018-10-24-new-caledonian-crows-ca...


I swear I read this exact same thread in nearly every post about OpenAI on HN. It's getting to the point where it almost feels like it's all generated by LLMs.


You mean the standard refrain of "we too are stochastic parrots"? Yes, that argument gets trotted out over and over.

LLM proponents seem unwilling to accept that we comprehend the words we speak/write in a way that LLMs are not capable of doing.


I was referring to the whole thread, so it includes the "LLMs are nothing but stochastic parrots" bit too.


> LLM proponents seem unwilling to accept that we comprehend the words we speak/write in a way that LLMs are not capable of doing.

Maybe their salary depends on them not understanding it.


Networks correspond to diagrams, which correspond to type theories; LLMs learn such a theory and reason in that internal language (in the sense of topos theory).

That effective theory is knowledge, literally.

People harping on “stochastic parrot” are just repeating a shallow meme; ironically, much like a stochastic parrot.


In the scheme of things I'd say most people don't know shit. And that's perfectly fine because we can't reasonably expect the average person to know all the things.

LLMs are very far off from humans in reasoning ability, but acting like most of what humans do isn't just riffing on or repeating previous data is wrong, imo. As I've said before, humans have been the stochastic parrots all along.


Arguing over terminology like "AGI" and the verb "to know" is a waste of time. The question is what tools can be built from them and how can people use those tools.


Agreed.

I thought a forum of engineers would be more interested in the practical applications and possible future capabilities of LLMs than in all these semantic arguments about whether something really is knowledge, or really is art, or really is perfect.


I'm directly responding to a comment discussing the popular perception that we, as a society, are "steps away" from AGI. It sounds like you agree that we aren't anywhere close to AGI. If you want to discuss the potential for LLMs to disrupt the economy there's definitely space for that discussion but that isn't the comment I was making.


Whether we should call what LLMs do “knowing” isn’t really relevant to how far away we are from AGI. What matters is what they can actually do, and they can clearly do at least some things that we would call knowledge if a human did them. So I think this is just humans wanting to feel we’re special.


>they can clearly do at least some things that show what we would call knowledge if a human did it

Hard disagree. LLMs merely present the illusion of knowledge to the casual observer. A trivial cross-examination is usually sufficient to pull back the curtain.


Noam Chomsky and Doug Hofstadter had the same opinion. Last I checked, Doug had recanted his skepticism and is seriously afraid for the future of humanity. I’ll listen to him and my own gut rather than to some random internet people still insisting this is all a nothing burger.


The thing is, my gut is telling me this is a nothing burger, and I'll listen to my own gut before yours - that of a random internet person insisting this is going to change the world.

So what exactly is the usefulness of this discussion? You think "I'll trust my gut" is a useful argument in a debate?


Trusting your gut isn't a useful debate tactic, but it is a useful tool for everybody to use personally. Different people will come to different conclusions, and that's fine. Finding a universal consensus about future predictions will never happen, it's an unrealistic goal. The point of the discussion isn't to create a consensus; it's useful because listening to people with other opinions can shed light on some blind spots all of us have, even if we're pretty sure the other guys are wrong about all or most of what they're saying.

FWIW my gut happens to agree with yours.


I'm convinced that the "LLMs are useless" contingent on HN is just psychological displacement.

It hurts the pride of technical people that there's a revolution going on that they aren't involved in. Easier to just deny it or act like it's unimpressive.


Or it's technical people who have been around for a few of these revolutions (which revolved and revolved until they disappeared into nothing but a lot of burned VC money) and recognise the pattern? That's where I'd place my own cynicism. My bullshit radar has proven to be pretty reliable over the past few decades in this industry, and it's been blaring at its highest level for a while about this.


Deep learning has already proven its worth. Google translate is an example on the older side. As LLMs go, I can take a picture of a tree or insect and upload it and have an LLM identify it in seconds. I can paste a function that doesn't work into an LLM and it will usually identify the problems. These are truly remarkable steps forward.
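That last workflow is easy to wire up, too. A minimal sketch, assuming the current openai Python client (the model name and the buggy function below are purely illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    broken_function = """
    def average(xs):
        total = 0
        for x in xs:
            total += x
        return total / (len(xs) - 1)  # bug: off-by-one denominator
    """

    # Paste the broken function in and ask for a diagnosis.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{
            "role": "user",
            "content": "This function should compute a mean but returns "
                       "wrong values. What's the bug?\n" + broken_function,
        }],
    )
    print(response.choices[0].message.content)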

How can I account for the cynicism that's so common on HN? It's got to be a psychological mechanism.


> "Plausible-looking but incorrect sentences" is cheap, reflexive cynicism. LLMs are an incredible breakthrough by any reasonable standard

No, it isn't. The previous state of the art was Markov-chain-level random gibberish generation; what OP described is an enormous step up from that.
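For anyone who never touched the older tech, a toy bigram Markov chain (a sketch; the corpus here is made up) shows how shallow that baseline was:

    import random
    from collections import defaultdict

    # Bigram Markov chain: pick the next word using only counts of
    # which words followed the current one in the training text.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start, length=10):
        word, out = start, [start]
        for _ in range(length - 1):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # sample from local counts only
            out.append(word)
        return " ".join(out)

    # e.g. "the dog sat on the mat and the cat sat"
    # -- locally plausible, globally aimless
    print(generate("the"))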


> and that seems highly likely to continue for the next few (at least)

Why? Text training data is already exhausted.


Yes, it turns out that, in the context of machines, the set of all the names we've given to things and concepts is not very large in the scheme of things.

The next focus will hopefully be on reasoning abilities. It's probably gonna take another decade, and another paper on the order of "Attention Is All You Need", before we see any major improvements... but then again, all eyes are on these models atm, so perhaps it'll be sooner than that.


>"Plausible-looking but incorrect sentences" is cheap, reflexive cynicism.

Literally today I used Bing and it was making up API parameters.

The code example looked fine, but didn't reflect reality.
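To make that failure mode concrete (a representative sketch, not the actual Bing output), here's the shape of it with the real Python requests library: a made-up retries parameter looks plausible but doesn't exist, while the real mechanism lives on a Session adapter:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    # Plausible-looking but made up: requests.get() has no such parameter.
    # requests.get("https://example.com", retries=3)
    # -> TypeError: request() got an unexpected keyword argument 'retries'

    # What actually works: configure retries on a Session via an HTTPAdapter.
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=Retry(total=3)))
    resp = session.get("https://example.com", timeout=5)
    print(resp.status_code)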



