Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are either trying to imply a fundamental deficiency or a performance ceiling, or trying to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory applied to modern ML" discussions around.
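If you want the information-theory link made concrete: a model's average cross-entropy on a text is, give or take the coder's overhead, the rate an arithmetic coder driven by the model's next-token probabilities would achieve. Here's a rough sketch of that measurement, assuming the Hugging Face transformers library and the public gpt2 checkpoint (my choice, purely for illustration):

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    text = "The quick brown fox jumps over the lazy dog."
    enc = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # Passing labels == input_ids makes the model report its own
        # next-token cross-entropy (mean nats per predicted token).
        out = model(**enc, labels=enc["input_ids"])

    n_predicted = enc["input_ids"].numel() - 1   # the first token isn't predicted
    bits_total = out.loss.item() * n_predicted / math.log(2)
    bits_per_byte = bits_total / len(text.encode("utf-8"))
    print(f"~{bits_per_byte:.2f} bits/byte vs. 8 bits/byte for raw text")

Anything well under 8 bits per byte is the "LLM as compressor" view in action, and better models push that number lower.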
The "fundamental deficiency/performance ceiling" argument I don't buy at all.
We already know that LLMs use high-level abstractions to process data - very much unlike traditional compression algorithms. And we already know how to use techniques like RL to teach a model skills that its dataset doesn't contain - which is where an awful lot of the recent performance improvements are coming from.
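To make the RL point concrete with a toy example - a bare-bones REINFORCE sketch, not anyone's actual RLHF pipeline - the "policy" below never sees a single example output, only a scalar reward, and it still learns a behaviour (always emit token 7) that no dataset supplied:

    import torch

    vocab_size = 10
    logits = torch.zeros(vocab_size, requires_grad=True)  # a trivial one-step "policy"
    opt = torch.optim.Adam([logits], lr=0.1)

    def reward(token: int) -> float:
        # Stand-in for a verifier, a unit-test suite, or a preference model.
        return 1.0 if token == 7 else 0.0

    for _ in range(200):
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        # REINFORCE: raise the log-probability of actions that earned reward.
        loss = -reward(action.item()) * dist.log_prob(action)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(torch.softmax(logits, dim=0).argmax().item())  # typically 7 after training

Swap "token == 7" for "the unit tests pass" or "the proof checks" and you get the flavour of how RL adds behaviour the pretraining data never demonstrated.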
And if you get that "sometimes" down to "rarely", and then to "very rarely", you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.
That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.
LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.
"What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
We've already found that LLMs implement the very same type of abstract thinking as humans do. Even with mechanistic interpretability still in a pretty rough state, you can probe LLMs and find some of the concepts they think in (see the probing sketch below).
But, of course, denying that is much less uncomfortable than the alternative. Another one falls victim to the AI effect.
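By "probe" I mean the standard trick of fitting a linear classifier on frozen hidden states to check whether a concept is linearly readable from them. A minimal sketch, assuming transformers and scikit-learn, with a made-up six-sentence toy dataset:

    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import GPT2Model, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2").eval()

    sentences = [
        ("The cat chased a mouse across the yard.", 1),  # 1 = about animals
        ("A dog barked at the mail carrier.", 1),
        ("Horses grazed quietly in the field.", 1),
        ("The truck stalled on the highway.", 0),        # 0 = about vehicles
        ("She parked the car outside the garage.", 0),
        ("The train left the station on time.", 0),
    ]

    def embed(text: str) -> torch.Tensor:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state      # (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0)             # mean-pool over tokens

    X = torch.stack([embed(s) for s, _ in sentences]).numpy()
    y = [label for _, label in sentences]

    probe = LogisticRegression(max_iter=1000).fit(X, y)
    test = embed("A flock of sheep blocked the road.").numpy().reshape(1, -1)
    print(probe.predict(test))  # expect [1]: the "animal" direction is there

If a probe like this generalises to held-out sentences, the concept is at least linearly encoded in the activations - which is the modest but real sense in which you can "find the concepts".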
> "What's actually happening" is all your life you've been told that human intelligence is magical and special and unique. And now it turns out that it isn't. Cue the coping.
People have been arguing that this is not the case for hundreds of years, at least.
I as a human being can of course not be replaced by a next token predictor.
But I as a chess player can easily be replaced by a chess engine and I as a programmer might soon be replaceable by a next token predictor.
The only reason programmers think they can't be replaced by a next token predictor is that programmers don't work that way. But chess players don't work like a chess engine either.
Hallucination has significantly decreased in the last two years.
I'm not saying that LLMs will definitely replace all programmers next year; I'm saying that there is a lot of uncertainty, and that I don't want that uncertainty in my career.