I am taking a contrary stance here, and not just to be a contrarian voice, but based on basic patterns over the past few centuries. The whole article is geared towards a specific audience. The interesting thing is that it is not exactly wrong, but the way it presents facts is intended for a specific type of consumption: in this case -- generating anti-AI sentiment.
<< Fundamental skills like mental arithmetic, memorising text, or reading a map could soon be obsolete as cognitive offloading becomes a normal way of working.
Calculators, books, GPS -- the three have been trotted out each time, and some ( what passed for books in ancient days ) were decried by otherwise smart people who simply could not fathom a different way of solving an issue. Worse, they offered no answer to:
1. Why do I need to calculate everything in my head?
2. Why do I need to memorize every passage?
3. Why do I need to remember every step?
So kids, who saw an improvement, simply ignored the old men.. and a good thing too. Otherwise, I might not even have been able to read Beowulf ( literally ).
<< it’s also the desire among people in positions of authority and influence
Is it? Recent news suggested that execs of various tech corps limit their kids' passive screen time ( so no doom scrolling, no social media ).
<< able to retain concentration so that we can learn and distinguish between what is real and what is AI slop
True, but in a sense that has always been true. If so, what is the real reason for this 'collection of words'?
<< The danger here is the separation of process from “product”. In the eyes of the utilitarian tech-evangelist, the essay is simply a product, a sequence of words to be generated as quickly as possible.
And here is the issue. The author is concerned that their words are no longer going to be special; note, not completely unlike certain monks upon learning about the printing press. How quaint.
<< But the process of writing is itself constitutive of understanding. Writing is thinking. It is the act of retrieving knowledge, wrestling with syntax, and organising logic that forges understanding.
Have you read some of the articles out there ( including this one )? There is no wrestling there. There might ( I am being charitable ) be some thinking, but if there is logic OR understanding, it does not go beyond what is required to serve the writer's owner. That is all there is to it.
<< When AI produces the final text, the student is the ventriloquist’s dummy, mouthing words that originated elsewhere.
Well, I will be darned. This individual is just taking words out of my mouth, because I was about to say all those talking ( sorry, writing ) heads are just parroting one another with the origin of the sound ( sorry again, word ) clearly not coming from them..
<< They possess the answer but lack the understanding of how it was derived
So.. we ban encyclopedias?
<< We are also witnessing a kind of cognitive laziness which some of our institutions are actively encouraging.
I can give him that. It does take effort not to rely on it.
<< It requires the uncomfortable sensation of not knowing
But... but.. the author knows.. he just told us all what to think...
<< float on a sea of algorithmic slop they have neither the will nor the wit to navigate.
And this is different from now how exactly? Scale? Kids who want to read will read. Kids who want to learn will learn.
***
Honestly, I am hard-pressed not to say this article is slop. Not even proper AI slop like we would expect today ( edit: because at least that is entertaining ). This is lazy human slop. High and mighty, but based on 'old man yells at cloud' vibes.
I was thinking a little about it lately. Not the saying itself, but its positioning to the general public. The annoying reality is that most of the things I consider important enough to voice discontent over ( and maybe even give up some convenience for ) are not always easy to 'present'. Note that it is not even always easy here either, but we do, by design, give one another a charitable read.
Hell, look at me, I care and I accepted some of it as the price to pay for peace at home.
Huh? No? It means that the overall platform is already at a 'good enough' level. There can always be an improvement, but in terms of pure visuals, we are already at a point where some studios choose simple representations ( see some 2D platformers ) as a stylistic choice.
It is not a question of want. Gaming will exist in some form so I am simply uncertain what you are concerned about.
Can you elaborate a little? What, exactly, is your concern here? That you won't have nvidia as a choice? That AMD will be the only game in town? That the GPU market will move from a duopoly ( for gaming specifically ) to a monopoly? I have little to go on, but I don't really want to put words in your mouth based on a minimal post.
Not a locked ecosystem console or a streaming service with lag!
I think if nvidia leaves the market for AI, why wouldn't AMD and intel too, along with the memory cartel. So the DIY market is gone. That kills lots of companies and creators that rely on the gaming market.
It's a doom spiral for a lot of the industry. If gaming is just PlayStation, Switch, and iGPUs, there is a lot less innovation in pushing graphics.
There was no DIY market on 8- and 16-bit home computers with fixed hardware, yet bedroom coders (aka indies) not only thrived, they were the genesis of many AAA publishers, and to this day those restrictions keep the Demoscene alive and recognised as world cultural heritage.
The PC was largely ignored for gaming until EGA/VGA cards, alongside AdLib/Soundblaster, became widespread in enough households to warrant the development costs.
Interesting. Does nvidia offer control? Last time I checked, they arbitrarily updated their drivers to degrade an unwelcome use case ( in that case, crypto ). That sounds to me like the opposite of control.
Separately, do you think they won't try to ingratiate themselves to gamers again once AI market changes?
Do you not think they are part of the cartel anyway ( and the DIY market exists despite that )?
<< So DIY market is gone.
How? One use case is gone. Granted, not a trivial one, and one with an odd type of.. fervor, but relatively small nonetheless. At best, the DIY market shifts to local inference machines and whatnot. Unless you specifically mean the gaming market..
<< That kills lots of companies and creators that rely on the gaming market.
Markets change all the time. One day EA is king of the mountain, the next it is filing for bankruptcy. Circle of life.
Edit: Also, upon some additional consideration and in the spirit of Christmas, fuck the streamers ( aka creators ). With very, very limited exceptions, they actively drive what is mostly wrong with gaming these days. Fuck em. And that is before we get to the general retardation they contribute to.
<< It’s a doom spiral for a lot of the industry.
How? For AAA? Good. Fuck em. We have been here before and were all better for it.
<< If gaming is just PlayStation and switch and iGPUs there is a lot less innovation in pushing graphics.
Am I reading this right? AMD and Intel are just for consoles?
<< It will kill the hobby.
That is an assertion without any evidence OR a logical cause and effect.
Not the person you're responding to, but I think there's a non-trivial argument to be made that our thoughts are just autocomplete: what is the next most likely word based on what you're seeing? Ever watched a movie and guessed the plot? Or read a comment and known where it was going to go by the end?
And I know not everyone thinks in a literal stream of words all the time (I do), but I would argue that those people's brains are just using a different "token".
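To pin down what "autocomplete" means mechanically here, a toy sketch in Python ( mine, not the parent's; the corpus and names are made up for illustration ): a bigram counter that, given the last word, guesses the most frequent follower. Obviously a caricature of both LLMs and human thought.

```python
# Toy "next most likely word" predictor: count which word follows which,
# then autocomplete by picking the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    # Most likely next word given the previous one, or None if unseen.
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(autocomplete("the"))  # -> 'cat' ('cat' follows 'the' twice in this tiny corpus)
```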
There's no evidence for it, nor any explanation for why it should be the case from a biological perspective. Tokens are an artifact of computer science that have no reason to exist inside humans. Human minds don't need a discrete dictionary of reality in order to model it.
Prior to LLMs, there was never any suggestion that thoughts work like autocomplete, but now people are working backwards from that conclusion based on metaphorical parallels.
There actually was quite a lot of suggestion that thoughts work like autocomplete. A lot of it was just considered niche, e.g. because the mathematical formalisms were beyond what most psychologists or even cognitive scientists would deem useful.
Predictive coding theory was formalized back around 2010 and traces its roots back to theories by Helmholtz from the 1860s.
Predictive coding theory postulates that our brains are just very strong prediction machines, with multiple layers of predictive machinery, each predicting the next.
There are so many theories regarding human cognition that you can certainly find something that is close to "autocomplete". A Hopfield network, for example.
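For anyone who wants the flavour of that Hopfield example, here is a minimal sketch ( numpy only, my own toy, not something from the thread ): store a few binary patterns, then recover a full pattern from a corrupted cue -- the pattern-completion behaviour being likened to autocomplete.

```python
import numpy as np

def train(patterns):
    # Hebbian outer-product rule; patterns are +/-1 vectors.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, state, steps=10):
    # Repeated synchronous updates pull the state towards a stored attractor.
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(3, 64))   # three random 64-bit "memories"
w = train(stored)
cue = stored[0].copy()
cue[:16] *= -1                               # corrupt a quarter of the bits
print(np.array_equal(recall(w, cue), stored[0]))  # typically True at this low memory load
```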
Roots of predictive coding theory extend back to 1860s.
Natalia Bekhtereva was writing about compact concept representations in the brain akin to tokens.
> There are so many theories regarding human cognition that you can certainly find something that is close to "autocomplete"
Yes, you can draw interesting parallels between anything when you're motivated to do so. My point is that this isn't parsimonious reasoning, it's working backwards from a conclusion and searching for every opportunity to fit the available evidence into a narrative that supports it.
> Roots of predictive coding theory extend back to 1860s.
This is just another example of metaphorical parallels overstating meaningful connections. Just because next-token-prediction and predictive coding have the word "predict" in common doesn't mean the two are at all related in any practical sense.
You, and OP, are taking an analogy way too far. Yes, humans have the mental capability to predict words similar to autocomplete, but obviously this is just one out of a myriad of mental capabilities typical humans have, which work regardless of text. You can predict where a ball will go if you throw it, you can reason about gravity, and so much more. It’s not just apples to oranges, not even apples to boats, it’s apples to intersubjective realities.
I don't think I am. To be honest, as ideas go, and as I swirl it around that empty head of mine, this one ain't half bad given how much immediate resistance it generates.
Other posters already noted other reasons for it, but I will note that by saying 'similar to autocomplete, but obviously' you are suggesting you recognize the shape and then immediately dismissing it as not the same, because the shape you know in humans is much more evolved and can do more things. Ngl man, as arguments go, that sounds to me like supercharged autocomplete that was allowed to develop over a number of years.
Fair enough. To someone with a background in biology, it sounds like an argument made by a software engineer with no actual knowledge of cognition, psychology, biology, or any related field, jumping to misguided conclusions driven only by shallow insights and their own experience in computer science.
Or in other words, this thread sure attracts a lot of armchair experts.
> with no actual knowledge of cognition, psychology, biology
... but we also need to be careful with that assertion, because humans do not understand cognition, psychology, or biology very well.
Biology is the furthest developed, but it turns out to be like physics -- superficially and usefully modelable, but fundamental mysteries remain. We have no idea how complete our models are, but they work pretty well in our standard context.
If computer engineering is downstream from physics, and cognition is downstream from biology ... well, I just don't know how certain we can be about much of anything.
> this thread sure attracts a lot of armchair experts.
"So we beat on, boats against the current, borne back ceaselessly into our priors..."
Look up predictive coding theory. According to that theory, what our brain does is in fact just autocomplete.
However, what it is doing is layered autocomplete on itself. I.e. one part is trying to predict what the other part will be producing and training itself on this kind of prediction.
What emerges from this layered level of autocompletes is what we call thought.
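A toy sketch of that "layered autocomplete" shape, purely illustrative ( the layer sizes, learning rule, and everything else are my own simplifications, not a model of the brain ): each layer forms a summary of the layer below, uses it to predict that layer, and learns from the prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [8, 6, 4]   # layer widths, bottom (input) to top
W = [rng.normal(scale=0.1, size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

def step(x, lr=0.05):
    # One pass up the hierarchy: each layer predicts the one below it,
    # keeps the prediction error, and nudges its weights to reduce it.
    errors, signal = [], x
    for i, w in enumerate(W):
        latent = w @ signal                    # higher layer's summary of what it sees
        prediction = w.T @ latent              # ...used to predict the layer below
        err = signal - prediction              # the "surprise"
        W[i] = w + lr * np.outer(latent, err)  # learn to be less surprised next time
        errors.append(float(np.mean(err ** 2)))
        signal = latent                        # only the summary travels upward
    return errors

x = rng.normal(size=sizes[0])
for _ in range(200):
    errors = step(x)
print(errors)  # per-layer prediction errors (they shrink for this toy fixed input)
```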
<< You are ready to fine-tune: You need the consistent, deterministic behavior that comes from fine-tuning on specific data, rather than the variability of zero-shot prompting.
<< You prioritize local-first deployment: Your application requires near-instant latency and total data privacy, running efficiently within the compute and battery limits of edge devices.
Thank you. I felt that was a very underappreciated direction ( most of the spotlight seemed to be on the 'biggest' models ).
I'm with you! Small generative models are awesome; I thought so a decade ago and I still think so now! The size of what is "small" has definitely increased though: I used to think a 100 parameter model was large back in 2016, but here I am now saying 270 million is small :)
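For the "local-first deployment" bullet quoted above, a rough sketch of what that looks like in practice ( the model id is just a placeholder for whatever ~270M-parameter checkpoint you actually have cached locally; swap in your own ):

```python
# Run a small generative model entirely on-device: no server round-trip,
# no data leaving the machine, and CPU is usually fine at this size.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-270m",  # placeholder small model, assumed to be downloaded already
    device_map="auto",
)

out = generator("The upside of a 270M-parameter model is", max_new_tokens=40)
print(out[0]["generated_text"])
```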
So far.. my track record has not been great, but maybe I was being a little too optimistic. As I shuffle various futures in my head, let's start from more likely and work towards more fun:
- Unabated push towards 'Snow Crash' levels of extremely localized power structures at the expense of the federal government ( think K-shaped economy, but for governmental structures )
- Actual further descent into a K-shaped economy -- that.. I fear.. is a very safe prediction to make now
- Midterms will see some localized politically motivated violence ( likely across the spectrum, bar some pressure valve release )
- Shadow wars will continue
- Bitcoin will crash; Monero will replace it as the dollar falls
- Companies and the government will desperately work together to contain a severely distributed ASI-level entity that exists as hidden invisible braille characters across all known fora
- I manage to move to full WFH
- Valve releases HL3 on Frame
- Fusion power will get closer by two kiloseconds
Honestly, I think it ran out of speculation runway and the current 'crashing' is just people cashing out the profits they made on the recent swings. It doesn't take away my 2026 prediction tho :P