When you put it that way it reminds me of the Severn/Keats character in the Hyperion Cantos. Far-future AIs reconstruct historical figures from their writings in an attempt to gain philosophical insights.
The Hyperion Cantos is such an incredible work of fiction. I'm currently re-reading it and am midway through the fourth book, The Rise of Endymion; this series captivates my imagination, and I would often find myself idly reflecting on it and its characters more than a decade after reading. Like all works, it has its shortcomings, but I can give no higher recommendation than the first two books.
I really should re-read the series. I enjoyed it when I read it back in 2000 but it's a faded memory now.
Without saying anything specific to spoil plot points, I will say that I ended up having a kidney stone while I was reading the last two books of the series. It was fucking eerie.
Almost no sci-fi has predicted world-changing "qualitative" changes.
As an example, portable phones have been predicted. Portable smartphones that are more like chat and payment terminals with a voice function no one uses anymore ... not so much.
The Machine Stops (https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...), a 1909 short story, predicted Zoom fatigue, notification fatigue, the isolating effect of widespread digital communication, atrophying of real-world skills as people become dependent on technology, blind acceptance of whatever the computer says, online lectures and remote learning, useless automated customer support systems, and overconsumption of digital media in place of more difficult but more fulfilling real life experiences.
It's the most prescient thing I've ever read, and it's pretty short and a genuinely good story, I recommend everyone read it.
Edit: Just skimmed it again and realized there's an LLM-like prediction as well. Access to the Earth's surface is banned and some people complain, until "even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject."
There is even more to it than that. Also remember this is 1909. I think this qualifies as a deeply mysterious story. It's almost inconceivable for that time period.
- People are depicted as grey aliens (no teeth, large eyes, no hair). The lesson: the Greys are a future version of us.
The air is poisoned and the cities are ruined. People live in underground bunkers... in 1909, when nuclear war was unimaginable. This was still the age of steamships and coal-powered trains. Even respirators would have been low on the public imagination.
The airships with metal blinds sound more like UFOs than blimps.
The white worms.
People are the blood cells of the Machine, which runs on their thoughts, like social media data harvesting or AI.
China invaded Australia. This story was written 8 years or so after the Boxer Rebellion, so in the context of its time that would have sounded like, say, Iraq invading the USA.
The story suggests this is a cyclical process of a bifurcated human race.
The blimp crashing into the steel evokes 9/11, 91+1 years later...
Zamyatin’s We was prescient politically, socially and technologically, but didn’t fall into the trap of everyone being machine men with antennae.
It’s interesting - Forster wrote like the Huxley of his day, Zamyatin like the Orwell - but both felt they were carrying Wells’ baton - and they were, just from differing perspectives.
In other words, sometimes, things happen in reality that, if you were to read it in a fictional story or see in a movie, you would think they were major plot holes.
Kindles are just books and books are already mostly fairly compact and inexpensive long-form entertainment and information.
They're convenient but if they went away tomorrow, my life wouldn't really change in any material way. That's not really the case with smartphones much less the internet more broadly.
Funny, I had "The collected stories of Frank Herbert" as my next read on my tablet. Here's a juicy quote from like the third screen of the first story:
"The bedside newstape offered a long selection of stories [...]. He punched code letters for eight items, flipped the machine to audio and listened to the news while dressing."
Anything qualitative there? Or all of it quantitative?
Story is "Operation Syndrome", first published in 1954.
To the proud contrarian, "the empire did nothing wrong". Maybe sci-fi has actually played a role in the "mimetic desire" of some of the titans of tech who are trying to bring about these worlds more-or-less intentionally. I guess it's not as much of a dystopia if you're on top, and it's not evil if you think of it as inevitable anyway.
I don't know. Walking on everybody's face to climb a human pyramid, one doesn't make many sincere friends. And one is certainly, rightfully, going down a spiral of paranoia. There are so many people already on the fast track to hating anyone else; if they have social consensus that someone is indeed a freaking bastard who only deserves to die, that's a lot of stress to cope with.
The future is inevitable, but only those ignorant of our self-predictive ability think that what's going to populate that future is inevitable.
It goes a bit deeper than that, since they got funding in the wake of 9/11 and the requests for the intelligence and investigative branches of government to do a better job of coalescing their information to prevent attacks.
So "panopticon that if it had been used properly, would have prevented the destruction of two towers" while ignoring the obvious "are we the baddies?"
To be honest, while I'd heard of it over a decade ago and I've read LOTR and I've been paying attention to privacy longer than most, I didn't ever really look into what it did until I started hearing more about it in the past year or two.
But yeah lots of people don't really buy into the idea of their small contribution to a large problem being a problem.
>But yeah lots of people don't really buy into the idea of their small contribution to a large problem being a problem.
As an abstract idea I think there is a reasonable argument to be made that the size of any contribution to a problem should be measured as a relative proportion of total influence.
The carbon footprint is a good example: if each individual focuses on reducing their small individual contribution, they could neglect systemic changes that would reduce everyone's contribution to a greater extent.
Any scientist working on a method to remove a problem shouldn't abstain from contributing to the problem while they work.
Or to put it as a catchy phrase. Someone working on a cleaner light source shouldn't have to work in the dark.
>As an abstract idea I think there is a reasonable argument to be made that the size of any contribution to a problem should be measured as a relative proportion of total influence.
Right, I think you have responsibility for your 1/<global population>th (arguably considerably more though, for first-worlders) of the problem. What I see is something like refusal to consider swapping out a two-stroke-engine-powered tungsten lightbulb with an LED of equivalent brightness, CRI, and color temperature, because it won't unilaterally solve the problem.
Stock buying as a political or ethical statement is not much of a thing. For one, the stocks will still be bought by persons with less strong opinions, and secondly, it does not lend itself well to virtue signaling.
Well, two things lead to unsophisticated risk-taking, right: economic malaise and unlimited surplus. Both conditions are easy to spot in today's world.
Still can't believe people buy their stock, given that they are the closest thing to a James Bond villain, just because it goes up.
I've been tempted to. "Everything will be terrible if these guys succeed, but at least I'll be rich. If they fail I'll lose money, but since that's the outcome I prefer anyway, the loss won't bother me."
Trouble is, that ship has arguably already sailed. No matter how rapidly things go to hell, it will take many years before PLTR is profitable enough to justify its half-trillion dollar market cap.
Saw a joke about Grok being a stand-in for Elon's children and had the realization he's the kind of father who would lobotomize and brain-wipe his progeny for back-talk. Good thing he can only do that to their virtual stand-in and not some biological clones!
Zero percent chance this is anything other than laughably bad. The fact that they're trotting it out in front of the press like a double spaced book report only reinforces this theory. It's a transparent attempt by someone at the CIA to be able to say they're using AI in a meeting with their bosses.
Let me take the opposing position about a program to wire LLMs into their already-advanced sensory database.
I assume the CIA is lying about simulating world leaders. These are narcissistic personalities and it’s jarring to hear that they can be replaced, either by a body double or an indistinguishable chatbot. Also, it’s still cheaper to have humans do this.
More likely, the CIA is modeling its own experts. Not as useful a press release and not as impressive to the fractious executive branch. But consider having downtime as a CIA expert on submarine cables. You might be predicting what kind of available data is capable of predicting the cause and/or effect of cuts. Ten years ago, an ensemble of such models was state of the art, but its sensory libraries were based on maybe traceroute and marine shipping. With an LLM, you can generate a whole lot of training data that an expert can refine during his/her downtime. Maybe there’s a potent new data source that an expensive operation could unlock. That ensemble of ML models from ten years ago can still be refined.
And then there’s modeling things that don’t exist. Maybe it’s important to optimize a statement for its disinfo potency. Try it harmlessly on LLMs fed event data. What happens if some oligarch retires unexpectedly? Who rises? That kind of stuff.
To your last point, with this executive branch, I expect their very first question to CIA wasn’t about aliens or which nations have a copy of a particular tape of Trump, but can you make us money. So the approaches above all have some way of producing business intelligence. Whereas a Kim Jong Un bobblehead does not.
Unless the world leaders they're simulating are laughably bad and tend to repeat themselves and hallucinate, like Trump. Who knows, maybe a chatbot trained on all the classified documents he stole and all his Twitter and Truth Social posts wrote his tweet about Rob Reiner, and he's actually sleeping at 3:00 AM instead of sitting on the toilet tweeting in upper case.
As an ego thing, obviously, but if we think about it a bit more, it makes sense for busy people. If you're the point person for a project, and it's a large project, people don't read documentation. The number of "quick questions" you get will soon overwhelm a person to the point that they simply have to start ignoring people. If a bot version of you could answer all those questions (without hallucinating), that person would get back a ton of time to, y'know, run the project.