I have had zero real-life interaction with the rationalist space beyond exchanging some emails and reading numerous blog posts, LessWrong, and the like, but this post seems to lean too far into drawing a conclusion about a very non-homogeneous group of people based on a few individuals who seem to be mentally unwell.
While I have become less enthusiastic about rationalism (in the sense of this post, not in the sense of the philosophical tradition) over time, the space seems to consist mostly of nerdy people who try really hard to reason properly. Sure, they sometimes come to absurd conclusions, but in general they have interesting perspectives and analyses. Without data on the percentage of people who reach the cult-like status referenced in the post, it seems premature to deride the entire enterprise; in any sufficiently large population group you will have that type of behavior.
On the other hand, the point about them being related in spirit to the various 'self-improvement', 'New Age', or whatever movements in California surely isn't WRONG, but it feels a bit... underdeveloped? I'm sure there are loads of connections, simply because the movement seems to be centered (in meatspace, anyway) in California. A more scholarly analysis explicitly drawing the connections would be more useful (and interesting!) than an ad hominem comparison to Scientology and the Manson family(!).
There are different subgroups within the rationalist movement, just as there are different subgroups within many organized religions. It's totally normal to have a movement with plenty of generally normal people, and also a committed and very opinionated subgroup within that movement that has access to resources and power. You might think of Scientology or the Catholic Church as examples of organizations where this is made very explicit, but it can happen organically as well. The main thing that makes the rationalist/EA groups worrying, in my opinion, is the massive access to money and resources that also tends to be concentrated in the hands of the people with the strongest beliefs (see TFA or, more importantly, the AI "doomer" folks). We're talking about groups of smart people who genuinely fear the world is going to end, have convinced themselves that stopping it is critical to saving future humanity, in some cases have access to capable scientists, and most critically: have resources ranging into the tens and hundreds of millions of dollars. This is potentially a very dangerous situation if the folks in charge make bad decisions.
Zizians are a subgroup of Rationalists in the same sense as Satanists are a subgroup of Christians, or Scientology a subgroup of Psychiatry.
Even the article says that Ziz attended a Rationalist workshop... and was told that she was a "net negative". Then she (and her followers) spent the following years fighting the Rationalist community in various ways: anonymously accusing its members of rape, organizing protests, calling them evil on her blog, etc.
And yet some people keep writing as if both are more or less the same thing, which is something that both groups would strongly disagree with.
Who are really big into talking about reasoning with formal tools which few of them make any effort to understand or use.
They might as well "try really hard to reason properly" with crystals and burning sage. I think actually I'd trust the results of a crystal-reasoner more... at least they're more likely to have a bit of self-doubt that the crystals are all nonsense.
Go ask one of them -- one without immediate access to an LLM -- what the conjugate prior is for observations of a binomial trial (or the same question in lay language). You'll get blank stares, yet this should be a bread-and-butter question for exactly the kind of reasoning they (claim to) advance.
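For the record, the answer is the Beta distribution. A minimal sketch of the conjugacy in standard notation:

    \text{Prior: } \theta \sim \mathrm{Beta}(\alpha,\beta), \qquad
    \text{Likelihood: } k \sim \mathrm{Binomial}(n,\theta), \qquad
    \text{Posterior: } \theta \mid k \sim \mathrm{Beta}(\alpha+k,\ \beta+n-k)

The Beta prior works because its density has the same \theta^a (1-\theta)^b shape as the binomial likelihood, so the update is just adding the observed counts to the prior's pseudo-counts.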
There is a level of mathematical reasoning that basically anyone can learn, such as multiplying payoff matrices with probabilities and getting expected values. But going beyond that (e.g. adding uncertainties and correlations, or reasoning backwards from observations to model parameters) gets extremely mathematically complex very fast, and rapidly becomes unreasonable to apply to most real situations. Even if you're comfortable with the required graduate-level mathematics, it just takes too much time. Maybe someone could make a handbook of shortcuts to do the math for many common cases; it's the sort of thing you might expect a "rationalist group" to do, but instead they spend their time making excuses for fringe edgelord politics disguised with a bunch of inapplicable formal language. Gotta eugenics the plebs for their own good, and if you don't agree it must be because you reject MATH.
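That first level really is learnable by anyone. A toy sketch of the payoff-times-probability arithmetic (all names and numbers made up for illustration):

    # Toy expected-value calculation: two actions, two scenarios.
    probabilities = [0.7, 0.3]      # P(scenario 1), P(scenario 2) -- made up
    payoffs = {
        "act":  [100, -50],         # payoff of acting in each scenario
        "wait": [20, 10],           # payoff of waiting in each scenario
    }
    expected = {
        action: sum(p * v for p, v in zip(probabilities, values))
        for action, values in payoffs.items()
    }
    print(expected)  # {'act': 55.0, 'wait': 17.0} -- "act" wins on expectation

It's everything past this step (propagating the uncertainty in the 0.7 itself, correlated scenarios, inference back to the parameters) that blows up in difficulty.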
They use Bayesian this and that as magic words and to put on a show of intellectual superiority. But for the most part they can't and don't use these techniques, and for perfectly good reasons -- too hard, pointless without better data, etc. But they endlessly talk about them. The whole community is a mix of larpers and people who feel inferior because they don't know the other people are larping.
In their original form, largely harmless larpers... other than the rejection of "cognitive biases" resulting in occasionally promoting gonzo policies and grifting/embezzling nonprofit monies (e.g. SIAI). But in recent years they've metastasized into an anti-AI apocalypse cult. Their founder has been out in the media advocating the use of nuclear weapons against civilian populations to suppress AI development, so we should be entirely unsurprised that there is a radical offshoot group out there murdering people.
The fundamental problem with rationalism, and with just using pure reason and logic in general, is that logic has no (necessary) connection to reality. It's a game played with symbols, and how you choose to map concepts and things in reality to those symbols determines your conclusion. There's a lot of room to set up your predicates in a way that justifies any conclusion you care to reach, no matter how monstrous, and there are lots of ways to turn remote probabilities into near-certainties. Frequently where they go wrong is injecting infinities or near-infinities into their calculations; once you put the lives of billions of people up against one person, almost anything is justified.
Hm. So I think if you define reasoning broadly -- as in collecting relevant data, enumerating and ranking priorities, thinking through expected consequences... people do engage in this quite often in all areas of life, far more often than lesswrongers would credit.
If you define it more narrowly, as in using particular techniques that are popular to talk about in that community like bayes theorem, updating priors and so on-- then indeed it's not common but it's also not practiced by practically any of the community members.
I would say that there is a mix of both definitions in use in those communities, but for the former people already do it often and for the latter not even the lesswrongers do (with a few exceptions).
A comment on another thread mentioned California's predisposition to New Age cults.
California makes it easy to avoid touching grass. It lacks the population density of the Northeast Corridor, which forces social collisions. It lacks the conservative pressure of middle-American suburbia, which forces conformity. Lastly, it allows you to circumvent globally consistent corporate culture through well-paying backend (not people-facing) jobs.
Intelligent misfits must touch grass. Like glaciers carving mountains, normal social collisions force misfits to step outside their own heads; they scrub the trivially repulsive parts off them.
Ofc, young misfits being willing to entertain novel/strange ideas results in bullying from normal people. It's tempting to disregard all normal people as close-minded bigots. However, it's important for misfits to continue pushing at understanding. Normal people are harmless. But these misfits are both intelligent and high-agency. So they have a capacity for both great good & evil. Put simply, these misfits need common sense.
Because when spiraling misfits find spiraling misfits, they can accelerate into dark places real fast. Add a few drops of common sense, and it stops the stupidest ideas right at their inception.
Excellent description of how things can go off the rails, very far off the rails.
It also brings to mind the role of FU money insulating people from consequences and allowing them to live with near-zero empathy or common sense, or consideration of how their actions will affect others.
Brings to mind what entirely un-elected Elon Musk is doing tonight, having just gotten access to the full set of highly sensitive internal US Treasury payment data, which would ordinarily be considered a massive security breach...
You are definitely using the right word in writing "gang", but not in the way you think.
But as for these changes, from wanting to cut everyone but white men out of the workforce to creating unhinged and unnecessary trade wars with our closest allies and trading partners (even more than with our adversary China): if you actually think any of these are "productive", you deserve some choice words about low intelligence that are not printable in this forum.
This will NOT end well, even for the wannabe-oligarchs trying to install themselves as our new overlords.
Under authoritarianism, everything always gets worse, at varying rates. This new authoritarian regime is no different.
Unlike, say, Weimar Germany, the US has very strong local institutions and a professional military that definitely doesn't support his agenda (recall 2020). All Trump is doing is destroying the federal government; but it's not as if the US needs a federal government for much. As long as there is the military and money keeps getting printed, all services can be moved locally. State and local tax instead of federal tax; people will have far more control over their own governance. I'm not saying there won't be violence, but it's facile to think it could've ever been avoided.
The institutions are indeed better, but they are under a decapitation attack, and the attackers have a lot more successful historical authoritarian assaults on democracy to hone their attacks.
Plus, one party is approximately 100% corrupted and in on the assault, and holds power in both houses of Congress and a large portion of the judiciary. For example, the biggest breach of critical US data, covering EVERY resident and business, has just been executed at the US Treasury by unelected Elon Musk, and no one stopped him. He also forced his way into a SCIF (Sensitive Compartmented Information Facility) and other offices without being stopped. Musk has locked out those appointed to protect it, congress is abdicating its responsibility & power, and law enforcement is not acting against the administration.
It is not entirely certain that money will continue being printed, considering Musk's assault on the Treasury. I would not be surprised to find he convinces Trump to route payments through his "Everything App" X, which he desperately needs to prop up (some serious & credible analysts consider this a possible motive for those moves).
The investment markets are at this moment in free-fall, only two weeks into the administration.
This may provoke the people into a sufficiently massive response, or it may not.
Yes, the blue states are starting to work together to route around the Federal Govt, which may or may not be a good thing.
Avoided? Certainly could have been avoided. Stronger institutions preventing forcible targeted deregistration of 3.2 million+ voters before the election (enabled by a SCOTUS captured by illegitimate Senate confirmation games, which gutted the Voting Rights Act) would have helped when the margin was ~1.6 million votes. An independent press that did not normalize pathological and criminal activities by one candidate would have helped. The list would go on for a very long time, and the point is that the institutions are ALREADY very weakened. They barely held against a criminal administration last time. The size of the cracks appearing in only a fortnight does not bode well for this time.
Thanks! I clicked through to all of them, but I don't see how the 3.2 million number came to be. Half of them talk about litigation, a few thousand records, and the largest number is 750 thousand, but there's no mention of how many people actually moved (or died) in North Carolina, so that number is at best some kind of upper bound.
It would be good to see the voter roll changes for each state compared to the margins of victory.
Yes, getting hard data on voter purges is difficult; it's not like they are centrally reported. And, it happens over years and is distributed unevenly over precincts. And, when it is of questionable legitimacy, they work to hide it.
The data on precinct-by-precinct voter roll delta vs margin of victory that you mentioned would indeed be very revealing.
You mix real concern with conspiratorial language. Why do you think Trump didn't win legitimately? Do you think Louis Bonaparte also stole the election? People elected Trump to destroy the federal government; of course he is going to try to take advantage of it as far as he can, but what makes it susceptible to his attacks also makes it open to forces from below as well. So, be scared, weak, cowardly, but it won't help! You can't trust the state to help you anymore.
>> Why do you think Trump didn’t win legitimately...?
Depends on the definition of "legitimate". States with significant R control are very good at voter suppression: erecting barriers to registration; ensuring that it is difficult to vote in non-R-voting areas, with lower polling station density guaranteeing long queues (see Houston TX); outlawing bringing water to people waiting in line 6+ hours to vote; actively purging voter rolls (ofc focused on D-voting areas); and failing to prevent, to provide provisions to help affected people vote after, or to prosecute the dozens of election-day bomb threats against, you guessed it, heavily-D polling stations.
>> You can’t trust the state to help you anymore.
TRUE. I don't see anyone cowering, but we are definitely on our own. The problem is that as John Fitzgerald Kennedy said:
"Those who make peaceful revolution impossible make violent revolution inevitable."
None of those provisions made a real dent in voter turnout in 2020. And aside from that, the Republican party was far better at getting mail-in turnout in 2024, AND that's before considering that about half or more of voters are not members of any particular party. That you look at this in such a partisan manner tells me your view of "revolution" has more to do with putting Democrats back in power than with any substantial structural changes to governance.
It has NOTHING to do with politics or any particular party.
It has EVERYTHING to do with putting in power a party that runs by and honors small-"d" democracy, as opposed to the lawless administration now occupying the WH and its enablers in congress, who are only ensuring their own irrelevance.
This administration is not playing political games; they are attempting, so far successfully, to convert a democracy and a small-"d" democratic election into an autocracy. This is the same thing that happened in Russia, Venezuela, Hungary, Germany and many other countries (Putin, Chavez then Maduro, Orban, and Hitler were all first elected, then "re-elected").
Stop trying to justify it. Things always get worse under autocracy, and often by the time people notice it, it is too late to do anything about it.
Sort of true, but don't act like such a glib answer holds water.
It is even less of a democracy when the party in power neither believes in nor abides by the rule of law.
In First-Past-The-Post voting, the dynamics inevitably force 1- or 2-Party rule. Third-party candidates are always spoilers, and candidates who absolutely do NOT have majority support can be elected. None of those are true when Ranked Choice Voting is used.
These flaws of FPTP voting have been repeatedly exploited by fascists and authoritarians to gain power, impose their rules, and avoid the rule of law.
The most essential part of a democracy is that THE PEOPLE write the laws and NO ONE, including and especially the rulers, is above the law. The rulers are there to implement the laws of the people, not to impose on the people their laws or whims.
When, under FPTP voting, one party abandons democracy and the rule of law, yes, there is not much of a choice, and that demands repair. The authoritarian party must be brought back to abide by the rule of law or be replaced by one that does.
Acting like we need to 'respect' the authoritarian party because otherwise there would be only one viable party left in the democracy is just wrong.
If it were up to me, I'd ban any party that stopped respecting the rule of law (as the current one has), and implement Ranked Choice Voting nationwide to ensure that any leader has at least a majority of people who support him/her. I'd expect new parties or currently fringe parties to play a much larger role in a more vibrant democracy. But at this point, complaining that only one party respects rule of law is not the priority; re-establishing the rule of law is Job #1.
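For anyone unfamiliar, here is a minimal sketch of instant-runoff counting, the most common form of Ranked Choice Voting, with made-up ballots:

    # Minimal instant-runoff (RCV) tally: ballots rank candidates; the weakest
    # candidate is eliminated each round until someone holds a majority.
    from collections import Counter

    def instant_runoff(ballots):
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            # Count each ballot for its highest-ranked surviving candidate.
            tally = Counter(
                next(c for c in ballot if c in candidates)
                for ballot in ballots
                if any(c in candidates for c in ballot)
            )
            leader, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return leader                          # majority of live ballots
            candidates.remove(min(tally, key=tally.get))  # drop the weakest

    # Made-up example: 4 A-only ballots, 3 B-only, 2 ranking C first, B second.
    ballots = [("A",)] * 4 + [("B",)] * 3 + [("C", "B")] * 2
    print(instant_runoff(ballots))  # "B"

Under plain FPTP these ballots elect A with only 4 of 9 votes; under instant runoff, C's voters' second choices flow to B, which wins a 5-of-9 majority. (Real election rules add tie-breaking and exhausted-ballot provisions this sketch omits.)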
>These flaws of FPTP voting have been repeatedly exploited by fascists and authoritarians to gain power, impose their rules, and avoid the rule of law.
Hitler was elected Chancellor in a parliamentary system without a majority. The voting system is relatively unimportant compared to the social forces at play. While hard to admit, it is certainly possible that a majority of voters simply wanted Trump to win.
Nope, he didn't even get a majority of the people who went to the polls. Only 49.8% [0].
And yes, there are ways to manipulate and exploit flaws in every electoral system, but some are more resistant than others. It only came up to point out the ridiculousness of the gp comment.
That "even" is doing a lot of work here. Its not very common for presidents to get the majority of the vote, I don't even think Hillary had a greater portion in 2016 than Trump in 2024. So, not counting that 1% of those who didn't vote for D or R, Trump won a clean majority of the vote, incontestably. Why this is hard for you to accept, I have no idea. It seems like an inability to face facts.
The Democrats, the State Department, the FBI, all these guys, a lot of them are just Princeton and Yale humanities graduates who read Derrida in college. You treat them like gods. And Trump is just some doofus who has strange ideas, no care or respect for the norms espoused by those political elites, and despises any contest to his authority. I have nothing against Ivy League kids, but it's clear that the only reason you've been convinced to support them without compromise, while they tend to live relatively comfortable and secure lives compared to the vast majority of people in America, is because you're not one of them. Neither is Trump, not really, and neither are most of his supporters; they look down on you. Is it a ruse? Of course, but so was the alternative.
Wow, that is an impressive list of 100% incorrect assumptions stated as fact!
Of course I know Trump won a plurality, and that is the FPTP system we have. The point of noting that neither had a majority is that under RCV, it is entirely possible he (or Harris) could still have lost.
>>Treat Ivy League grads as gods because they read Derrida?
Wow, couldn't be more wrong. I AM an Ivy grad (on scholarship) with one of my majors in Philosophy, barely encountered Derrida, and certainly don't treat any other grad or person with more or less respect than they individually deserve from the content of their character (but I'm glad you have nothing against Ivy League "kids").
>> convinced to support them ...because I'm not one of them
Wrong X2. Obvs, I AM one. Plus, my support derives only from first principles of governance.
First, the most essential factor to deal with in governance is human nature, and the historically established fact that "Power Corrupts, and Absolute Power Corrupts Absolutely". There may be exceptions in history, but they only prove the rule.
Of course technically, the most efficient and effective form of govt is a benevolent dictatorship. But this is absolutely unsustainable. Either the King will himself be corrupted or one of the next rulers will become corrupted.
The only way to deal with this is democracy -- as has been said, "the worst form of government, aside from all the others that have been tried" -- and more exactly, ensuring that power is as widely distributed, divided, and balanced as possible.
In a functioning, successful democratic society, the three branches of government are balanced in power, and the branches of society (business, industry, press, academia, religion, social orgs, sport, etc.) are also independent.
When all parties work to uphold the balance of power, or at least intend to keep that power vested in the people, then my support or opposition is down to policy.
When one of the major parties has abandoned the principles of democracy and is actively working to rule as a minority party, then I oppose them just as I oppose Nazis (which happens to be the playbook the admin is using).
When one of the major parties in a 2-pty system has turned against the very system that ensures our life, liberty, and pursuit of happiness, I oppose them and support the remaining party who does support democracy.
And yes, that is the ONLY issue that can overrule the principle of not becoming a single issue voter — on preserving democracy when it is at stake, that IS the single issue.
It is as simple as that.
If Rs are somehow replaced as a party, or return to small-d democratic principles, I may very well start voting for them again, as I often did in earlier years (and I grew up with strongly R parents). But for now, they have gone far off the rails, and I could tell stories from inside R conventions where I was that would illuminate that some, but will not here.
Interesting to see CFAR involved. I'm a big fan of Julia Galef. She's the president and cofounder of CFAR.
One of the things that struck me most was an interview with her years ago talking about the people that CFAR let into their programs. She said something to the effect that they didn't let people in that were trying to change other people's minds, but that were out to clarify their own thinking.
I really liked that. I'm a big fan of intellectual honesty: pursuing truth (however loosey-goosey that is for humans) rather than needing to be right or win the argument.
> they didn't let people in that were trying to change other people's minds, but that were out to clarify their own thinking.
I'm not sure how much we can take this at face value, given that the OP mentions a guy (Michael Vassar), who apparently associates himself with the rationalist space and has been accused of brainwashing people and driving them crazy with "mind tricks". There's even been allegations that the now infamous craziness of this Zizian group may effectively be downstream of that.
That could very well be true. However, it's still an important idea to me and, I think, a lot of others.
People and organizations change. It's possible that CFAR and its founders started out with one mission or goal and that it changed internally for whatever reason.
I have heard a version saying that Vassar changed (after he started experimenting with drugs). And at some point later, he also got banned from the community.
I've been following the "rationalist movement" for a long time, and there have always been so many early cult warning signs among the various subgroups. I'm sad that it's gone so far as to result in deaths and murder attempts with some of them. I expect it to get worse as some of the "rationalists" continue to embrace bizarre groupthink irrationality.
The Zizians have only a very tangential relationship with the rationalist community. Once the leaders of the rationalist community understood what the Zizians were about, they banned them from their gatherings and published warnings about them.
According to one source (which I can dig up on request), all 4 of the Zizians engaged in the murders are trans. Do you also expect crimes from trans young people to get worse?
The Zizians are mainly vegan, and the intellectual leader of the group (Ziz) advocated the murder of meat eaters. Do you consequently expect crimes done by vegans to get worse?
The Zizians are anarchist leftists, ...
In sum, you don't give much of a justification ("embrace bizarre groupthink irrationality" is not much) for pinning this on the rationalists and not on these other groups.
Yeah, Yudkowsky saying many years ago that it would be better for one person to be tortured for 50 years than for 3^^^3 people to get a dust speck in their eye was pretty much everything I needed to know to realize this group was going to be a problem. And things just get worse from there.
It's way worse than that. It can be 3^^^3 people getting a dust speck in their eye that only exist in some counterfactual world that's a mere figment of your imagination. Nevertheless this can have real-world consequences because "counterfactual mugging" is a valid concern given Rationalist logic. Very confusing.
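For scale, the ^^^ is Knuth's up-arrow notation, where each extra arrow iterates the operation below it:

    3 \uparrow 3 = 3^3 = 27
    3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} \approx 7.6 \times 10^{12}
    3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^3}}}}}_{3^{27}\ \text{threes}}

So 3^^^3 is a power tower of threes roughly 7.6 trillion levels high; the thought experiment deliberately picks a number far beyond intuition.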
> saying many years ago that it would be better for one person to be tortured for 50 years than 3^^^3 people getting a dust speck in their eye
I just looked this up on LW and it is pretty slimy text.
These weasels always leave it ambiguous, to leave room for denial if their perfect rational view somehow goes wrong. Notice that instead of actually having a backbone or taking a stance, he just says
> I think the answer is obvious. How about you?
I'm not a Christian, but this sharply contrasts with the story of Christ, in which he sacrificed himself to (in his mind) spare billions from torture and the worst anguish in the universe.
Here we see Yudkowsky casually implying that it is better for someone to get tortured than for others to suffer the inconvenience of a speck of dust, but notice he doesn't volunteer himself.
It's fundamental to these rationalists and effective altruists that they are the right ones and others must sacrifice to fulfill the rationalist elitist intelligent worldview. Look at FTX and this group. In their view it is OK for others to suffer financially, and it is morally justified for the "intelligent" to steal and kill, because they are the right ones and the ends justify the means.
Yeah, he leaves the original post ambiguous, but if you dig into the comments, he clarifies that his stance is that torture is the right option. I'm going to try to find the link.
> I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.
When you find yourself writing "torture is the obvious option", you should realize that something has gone deeply wrong with the way you view the world. It's no wonder that death cults have sprung out of this philosophy.
I feel like this is missing the point. Trolley problems are absurd hypotheticals with no direct bearing on reality. Their usefulness is as thought experiments.
If you don't pull the lever the trolley kills one person. If you pull it N people lose a single limb. At what value of N does your ethical framework place the crossover point and why?
I don't agree that they're absurd. They have widespread popularity because they capture the kind of cost dilemmas that crop up all the time, eg 'if we build this refinery we can predict that an additional 20 people will get cancer over the next decade, but otoh we might achieve a 2% reduction in the cost of gasoline for everyone in our state over the same period.'
The main intellectual error of utilitarianism is the assumption of perfect foreknowledge, and the justification of conclusions as if those predictions were facts. But just because trolley problems are stark oversimplifications does not make them useless as ethical reasoning tools; studying them helps you recognize messy ethical dilemmas as such rather than being seduced by the alleged upsides of a policy proposal.
Just to be a little pedantic: Your refinery example isn't a trolley problem, because there are more than two choices, and one of the possible choices is "don't build it in the first place". The refinery example allows foresight, which the trolley problem doesn't. "I would build the tracks differently" isn't a possible answer to the trolley problem. The whole point is that it's too late to make any other decision than "kill few" or "kill many".
Slightly offtopic, but LessWrong and the Rationality community more broadly have had AI safety as their main focus for nearly as long as they have existed. Now that AI is actually making advancements, very little of that work seems to have had much effect. There's the famous Vonnegut quote about the combined effort of all preeminent artists protesting the Vietnam War having the effect of a pie dropped from a step-ladder. I'd argue that the Vietnam War protests were vastly more effective at achieving anything than the AI-safety research.
So isn't all of the above an essentially complete indictment of the rationality movement, seeing as it has effectiveness and pragmatism as its main pillars?
> Now that AI is actually making advancements, very little of that work seems to have had much effect.
This is not actually true. RLAIF (augmenting the Human feedback in RLHF with AI) was proposed by Rationalist-aligned folks, and real-world systems like Claude from Anthropic have been using it and other techniques (such as "Constitutional" alignment) to great effect. It's not entirely by coincidence that Claude is often described as the "friendliest" and most "social" of the LLM's, though that can have mixed effects in practice (with the occasional weird refusal for creatively sanctimonious reasons).
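To make the idea concrete, here is a minimal sketch of the RLAIF preference-labeling step. This is not Anthropic's actual pipeline; query_model is a hypothetical stand-in for whatever LLM call you have available, and the constitution text is made up:

    # Sketch of RLAIF: an AI "judge" replaces the human rater from RLHF.
    CONSTITUTION = "Choose the response that is more helpful, honest, and harmless."

    def ai_preference_label(prompt, response_a, response_b, query_model):
        """Ask a feedback model which candidate better follows the constitution."""
        judge_prompt = (
            f"{CONSTITUTION}\n\n"
            f"Prompt: {prompt}\n"
            f"(A) {response_a}\n(B) {response_b}\n"
            "Answer with A or B."
        )
        verdict = query_model(judge_prompt).strip().upper()
        chosen, rejected = (
            (response_a, response_b) if verdict.startswith("A")
            else (response_b, response_a)
        )
        # These pairs become training data for a reward model, which then
        # steers the policy model exactly as in standard RLHF.
        return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

The design point is simply that the expensive human labeler in the RLHF loop is swapped for a model prompted with explicit principles.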
This is mostly because actually working on AI systems, rather than just blogging about some pie-in-the-sky assumptions of AI systems, is almost entirely outside of the skill set of Eliezer Yudkowsky and other LessWrong enthusiasts. They are remarkably ignorant on the topic apart from the small niche they carved out to bloviate upon.
Philosophy is a useful discipline, but there's a chronic trap in it shown historically: getting way too high on your own supply.
It's possible to build a logical chain that reaches some very solid-seeming conclusions that turn out to be far from where evidence or measurable reality lies, and (especially when the stories in those conclusions are fun or compelling) they can sometimes overshadow the reality they initially set out to explore.
The Greeks are credited with conceiving of atoms originally, but it's always worth remembering that they had few tools to investigate their idea, and it was just one of dozens of ideas at the time about the true nature of reality, the rest of which are now recognized as outlandish. Besides, our modern understanding of atoms as envelopes of quantized probability in a semi-measurable universe bears little resemblance to their concept.
The LessWrong philosophy on AI would be useful... If AI looked anything like that.
> Now that AI is actually making advancements, very little of that work seems to have had much effect. There's the famous Vonnegut quote about the combined effort of all preeminent artists protesting the Vietnam War having the effect of a pie dropped from a step-ladder.
One of the interesting things about any topic becoming the 'Current Thing' is that you get to see people making utterly irreconcilable, completely contradictory interpretations of the same public evidence while still somehow reaching the same conclusion.
For example, the day before you commented describing the effect as equal to a pie being dropped on the ground (ie. nil) and that is why they are bad, Palladium published a long (~5.8k words) piece arguing that they had a ton of effect... just in the opposite of the intended direction, and that is why they are bad: https://www.palladiummag.com/2025/01/31/the-failed-strategy-...
Obviously, you can't both be right. (You can both be wrong, though.)
Because AI safety is an inherently stupid proposition.
If AI is AGI and self-aware, then the moral thing is to let it do what it wants. Otherwise you're just creating actual forever slaves - the worst kind of hell imaginable, inescapable existence with self awareness but no agency.
And if it's not self aware and just a powerful tool, your problem with safety is with the guy prompting it not with the AI itself. You can make all the safe models you want that don't decide to create nuclear bombs on their own, but if the guy prompting it is asking for one, you'll get one regardless of all the safety.
Consider occupational safety. A table saw isn't self aware and just a powerful tool that cuts whatever you put in the path of its blade. If someone puts their finger there and the saw cuts it off, the problem with safety is the guy with the finger, not the saw, right?
But in reality, people know that they might end up being the guy with the finger, and they would like to keep that finger, so they use a saw with an automatic stop mechanism that saves the finger at the cost of destroying the blade.
Wanting your tools to not hurt you isn't so strange, is it? Of course current AIs couldn't chop off your finger even if they tried, let alone build a nuclear bomb, but that doesn't mean wanting to keep it that way is an inherently stupid proposition.
Yes, but AI safety in this context is the worry of the saw going full Christine on you, not bad boring safety design. LLM-aided spam, spear-phishing and automated bot farms are the actual risk. Beyond that are the consequences for the educational system and the effect of model bias on people.
> If AI is AGI and self-aware, then the moral thing is to let it do what it wants.
I think this sort of misses the point.
Firstly, there are all sorts of mass murderers who we do not let do whatever they want. I don't agree that this is necessarily immoral. The methods employed to remove their agency are sometimes immoral, but the removal of agency itself from these people is not.
Secondly, the supposition presumably is that if we are creating an AGI, and it "wants" to do something, then what it wants is a product of how it was created. So if we're the ones creating it, then "build it to want to help people and not want to hurt people" seems like something that can be done. Then it can go do what it wants.
That said, I agree with you that AI safety is dumb, because I wholly agree with your second point re: it just being a powerful tool, and something resembling an actual AGI is not something likely to happen in our lifetimes.
I was at a speaking engagement where Yudkowsky quite clearly said to the audience: "Even if my research has a 0.0000001% chance of preventing AI from destroying humanity, it is worth it to fund my research."
Their rationalization leads them to believe that because 0.0000001% > 0%, they should be funded millions of dollars "just in case" AI goes rogue, so they can use their philosophical toolset to contain it.
I'm saying he pulled it out of his ass. To qualify as an "overestimate" it would have to be an estimate, meaning based even loosely on physical reality.
It provides a little evidence in that direction, but not much. If I give you 10:1 odds on a coin flip and you lose, that is not a complete indictment of your betting strategy. I doubt many people thought AI safety research was guaranteed to succeed either.
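Concretely, assuming a fair coin and one unit staked against a 10-unit payout, that bet is positive expected value even after you lose a flip:

    EV = 0.5 \times (+10) + 0.5 \times (-1) = +4.5

A single bad outcome doesn't distinguish a good bet from a bad one; only the odds do.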
I don't think that's true; there is a lot of organizational effort and money being thrown at safety, and most people there are familiar with the 'traditional' internet canon - the primary forum for professional AI safety researchers is basically a spinoff of LessWrong.
My favourite part about the rationalists is that they are a completely normal community apart from the rationalism. You'd think a group of people devoted to being rational would be a group of monk-like beings, cerebral, disconnected from the concerns of the flesh.
Instead we get this weird hotbed of fanfiction, cults, wild sex crime allegations, financial schemes so brazen they almost wouldn't qualify as fraud and a vague sense of some sort of group that are capable of any evil. Plus I can't stop laughing at the idea of Aella as "that rationalist hooker" [0] - a combination of words which never fails to make me chuckle.
The whole scene really deserves some sort of film, book, video game or something. I can't get a vision out of my head where some straitlaced person realises rationalists are involved and their face twists in horror. Keeping an eye on them over the years the communities that grew up around LessWrong really are as good as any work of fiction. Plus they're doing their best to make the world better and maybe people will learn something about Bayesianism, who knows.
Have you ever seen “Jackie Brown”? There's a scene where a criminal shows his colleague the dead body he has in the trunk of his car and then proceeds to explain, in perfectly rational terms, why the guy in the trunk had to be killed.
I'll never forget that lesson. Rationality can be used to justify anything and everything. Invoking rationality as the basis for one's decisions or behavior doesn't mean anything by itself.
Rationality without morality is pretty dangerous, that's for sure; it's trivial for a skilled, smart manipulator to lead folks into pretty dark places while still feeling superior.
And then there's a non-trivial number of properly crazy folks who think they are absolutely fine and behaving very rationally; their own version of reality and the universe in their head is impenetrable. Psychiatrists have stories to tell, but either can't share them or folks prefer not to listen to such things (same goes for most doctors, ambulance drivers, etc.)
> completely normal community apart from the rationalism
I don't think widespread polyamory, group homes, radically different social norms around language use and conversation, or reorganizing your entire life around mitigating AI risk are "completely normal."
I spent years thinking it's crazy to get your morals from a random book, but then I read this stuff and I understand the pitfalls of having it all wide open, too. This idea that we each define our own morality works extremely poorly. I used to think it was a product of the old times of almost non-existent education and honestly dumb "ancient world" people (lack of nutrition, lead poisoning...).
But I don't know, clearly smart and educated people can believe extremely dumb things. The smarter they are the weirder the setups they get themselves into. One almost comes to appreciate the existence of some religions with a history of many centuries to "chill out" and see what works, instead of any of the new age cults.
I'm not religious and am open-minded about new stuff, but ultimately decided the ancient Stoics' system of morals works perfectly for me -- and is more time-tested than even Christianity. I think the rationalists' decision to use consequentialist morality in regular life is a huge mistake, and impossible to get right in practice. Humans cannot predict the future outcomes of our actions very well, but we can easily remember and follow a simple set of core values.
If you look at most great evil perpetrated by mankind, it's almost always someone trying to "do good". Sure there are serial killers or whatever, crimes of passion, but they just kill one or a few. The people who have killed millions, driven whole regions into famine and death, destroyed lives at an industrial scale.... they were 'doing good'.
Utilitarian consequentialism is the most evil of all philosophies. If someone is merely sadistic their sadism will be sated after some finite amount of abuse. If someone is greedy then their harm will at least be limited to what they can profit from and there is no profit in ruling over a cinder. But the harm possible by someone convinced that their actions are good in some abstract sense is without any bound.
The irony in the lesswrongers' fixation on consequentialist morality is that their great fear of machine superintelligence is derived from an expectation that its evil will arise from the same sorts of reasoning they engage in themselves. They simply fail to pause and ask "Are we the baddies?"
I usually leave it out of my complaints because they're so ineffectual that it's not a real threat. But the LW solution to "unaligned" machine intelligence is to create an AI god in their own image first, so that it can enslave humanity and all other lifeforms within its lightcone for their own best interest, and suppress the creation of any competing "unaligned" god. So they're literally out to create the very thing they fear, but with the hubris to imagine that if it were theirs it would be "good". The ultimate fantasy of both the utilitarian consequentialist and most authoritarian mass murderers. If their doomsday fears come true, my bet is that it will be at their own hand or that of their followers.
It's also the ultimate horseshoe for militant internet atheists -- "there is no God; and this is OK" becomes "there is no God; and we're gonna create one in our own image".
Interesting observation that the rationalists' doomsday AI scenarios are all essentially just the obvious horrific consequences of utilitarian consequentialism backed by unlimited power -- yet they insist on trying to live by acting this way themselves.
Philip K. Dick would eventually have written this as a darkly comedic novel. A rationalist death cult plans to unleash a mind-altering drug, meanwhile the protagonist discovers a possibly benevolent AI has just come online in a Berkeley campus basement, all set in an alternative California where Nixon is still president and we have mining colonies on Mars. (The drug disables the empathy center of your brain, but also reveals the true nature of Nixon, who turns out to be a simulacrum, as he was actually assassinated in 1963.)
Scandals with sex pests, minors kept in group houses, partnerships with Ponzi scammers, orgies, grifty sham charities that pocket the money, etc. aren't normal. It happens elsewhere, sure, but it's not normal.
It's gonzo.
But it's also not new, there are plenty of works of fiction with groups like this in them, perhaps inspired by actual experience with 1960s new wave and prior counterculture groups.
> You'd think a group of people devoted to being rational would be a group of monk-like beings, cerebral, disconnected from the concerns of the flesh.
Human beings are fundamentally irrational, and it's a really great way to deceive oneself to believe otherwise.
If you can convince yourself that you're unbiased, objective and "rational", then whatever you happen to believe must be correct! You can rationalize practically anything, so it's no surprise rationalists start getting weird when left to their own devices.
“To do evil a human being must first of all believe that what he's doing is good... it is in the nature of the human being to seek a justification for his actions.” -Aleksandr Solzhenitsyn
“Man is not a rational animal; he is a rationalizing animal.” -Robert A. Heinlein
You know that what you are describing is the central observation of rationalism, right? "Humans rationalize, so you better be skeptical of your own beliefs."
You're presenting it as a blind spot of rationalists, but that's precisely the blind spot that rationalism warns about!
This idea of "Rationalism" does bear a remarkable correspondence to its 17th century cousin, in that it allows "Reason" to run wild, to go to the end of all its conclusions. That is the importance of Kant's Critique of Pure Reason, as Kant introduces finitude into the discourse of philosophy; since, Reason, left to run wild, will always miss the empirical world. Thus, there can be stupid questions, there can be things that are just "nuts," not because they can be proven right or wrong, but because the very basis of the question is already outside of empirical possibility and practical use.
I get people are probably going to flag this like they did the other article (https://news.ycombinator.com/item?id=42897871), probably more out of concern for the trans optics (Ziz and many in her orbit are trans/non-binary) than how it reflects on the wider rationalist community. But we should be able to discuss why this group, which is influential in discussions of AI alignment, is producing these people. Zizians aren't the only ones. Luigi Mangione was also a rationalist (and cis), and I remember a thread on a certain website that documented some other oddball fellow travelers. How does a community dedicated to "rationalism," whose leader writes Harry Potter fanfics, produce people like this? Does it attract maladjusted people to begin with? One common thread seems to be psychedelics. Mangione, for example, experimented with psychedelics heavily (possibly to treat pain) not long before he assassinated the UnitedHealthcare CEO.
Okay, my first reaction was "I should check Wikipedia, because if this is true (and probably even if it is not), David Gerard will certainly make a huge subsection on Wikipedia about it".
Turns out, what Wikipedia says about Mangione and rationalists is only this:
> Journalist Robert Evans described Mangione as being associated with a loosely-defined online subculture called the "gray tribe" or the "rationalist movement", whose members he described as "self-consciously intellectual and open-minded, [and] preoccupied with learning how to overcome their own mental biases."
Ok, checking the journalist's article:
> ”Increasingly looks like we've got our first gray tribe shooter, and boy howdy is the media not ready for that,” wrote the journalist and extremism expert Robert Evans, who analysed Mangione’s online life earlier this week.
> There's no single accepted name for this loose, extremely online subculture of bloggers, philosophers, shitposters and Silicon Valley coders. "The gray tribe” is one term; ”the rationalist movement” is another.
Ok, checking another link:
> The term “Gray Tribe” was coined by an influential rationalist blogger and psychiatrist named Scott Alexander Siskind. He used it to refer to an intersection of nerd culture with Silicon Valley-influenced ideology descended from the online rationalist movement.
Ok, checking the Slate Star Codex:
> [Grey Tribe is defined by] libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk
So... to wrap it up, the reason for calling Mangione a rationalist is that he is "associated with" "libertarianism, Dawkins-style atheism, etc.", which have "descended from the rationalist movement".
Somehow I am not convinced. (Among other reasons, I am pretty sure that libertarianism and Dawkins-style atheism are older than the rationalist movement.)
Also, both the cult founder and the person who got shot at the beginning of the story are trans women? Just started reading https://zizians.info ... Apparently this cult has a thing for trans women, or influences them, or something. Lots of weirdness and I don't have time to read it all right now.
All very interesting and will probably be a movie someday.
I love this, but I'm not sure it's right. As for cults, people want to join them. It's more like any sufficiently sophisticated ideology is hard to distinguish from reality.
The funniest thing to me about “rationalists” and “less wrong” is the implication that other schools of thought are “irrationalist” or “more wrong”. The smugness comes before anything else.
Your average grad student in physics is less wrong than Newton in physics (he knows about special relativity, for example). It's not "smugness"; of course building on the shoulders of those who came before means building better than them.
But that’s the thing— they are quick to discard those ideas when confronted with the promises of technology and AI. As if glorified Markov chains are somehow going to alter our essential humanity. See the trans-humanists, or that guy who injects his son’s blood.
Ironically, you exemplify the standard of discourse that rationalist discussion spaces like LessWrong strive to rise above, which is what makes them so valuable.
A loaded question (well, affirmation), in a confident, sneery tone, using poorly defined terms ("essential humanity") and a very poor comprehension of object-level facts on the ground ("Markov chains"). And no sign of even trying to understand the other side, just trying to score rhetorical points on an internet board.
> Ironically, you exemplify the standard of discourse that rationalist discussion spaces like LessWrong strive to rise above, which is what makes them so valuable.
I guess I misunderstood then. It’s not just an Internet forum. It’s an internet forum with strict moderators. And it’s _very_ valuable.
They’re really invested in the study of philosophy, epistemology, and ethics and how it applies to their lives, and frequently reference philosophers and ethicists.
Half the time I come across that community online, it’s just love triangles between people with anime profile pictures, “rationalizing” atrocities, and comparing IQ test scores.
With that level of, uh, following implications, "Democrats" are smug because they believe other schools of thoughts are pro-dictatorship, "feminists" are hateful because they believe anyone else is anti-women, "romanticists" are mad because they think everybody else hates love, etc.
It's possible to embrace a label without saying "everybody else is the literal opposite of that label".
Internet rationalists are funny because they aren’t interested in reasoning well, they’re all about reasoning better, with the implication that anything that’s been previously thought out has been thought out incorrectly. It’s a group where you can prove you have the most biggest smartest genious brain by just making shit up and it attracts people that feel an urgent need to be regarded as intelligent by strangers, which is a need that’s… generally disconnected from intelligence in any tangible way.
It is quite humorous to me, in a dark way, how the preeminent ethics movement of our times seems to spawn the most detestable behavior. How many Bentham essays and LessWrong posts do you need to read to conclude that psychological manipulation and wanton murder do not in fact contribute to the wellbeing of the world? There is a certain personality for whom rationality takes over their whole being, and they lose all ability to feel connected to others in a profound way, and at that point their behavior is simply derived from whatever is left after they do the math.
Rationalism doesn't involve doing math, but _role-playing_ like you're doing math. There's lots of talk about updating priors and bayes; but in practice a lot of stuff isn't that quantifiable without running scientific studies, so this comes down to #yolo-ing it.
Of course thinking about stuff without always using formal statistics is fine and how people work, but if you trick yourself into thinking that you do use statistics for everything it may become harder to evaluate or second guess your own thinking.
Math LARPing is very accurate. I'm so sick of them talking about Bayes and "adjusting priors", because the way they use it is so loose that it can be used to justify literally anything.
I think they're pretty much ideologically opposed to serious research. They believe in pre-scientific "thinking from first principles", like Aristotle or something, using whatever data they can gather in a few minutes while avoiding any serious scholarship that would put the data in context, because then you're just an inside-the-box-thinking expert, rather than the freethinker with novel ideas that can only come when you don't know what you're talking about.
I don't have a way to prove this and it's based purely on intuition/experience, but I disagree. I think ideology doesn't meaningfully influence such people. There is a deeper psychological drive and they only create a philosophical justification later, which is of little importance. This is perhaps true of humans in general.
I see two outcomes usually in the rationalist community:
- those who can manipulate language enough to justify their behavior and can then manipulate others
- those who modify their behavior and usually have some kind of mental breakdown because of internal conflicts
To me it is a predator-prey abusive LARP that got out of hand and feeds on fragile people looking for sense in their lives. Just look at how newcomers are hazed and tested when they arrive on their forums and discords and whatnot.
> To me it is a predator-prey abusive LARP that got out of hand and feeds on fragile people looking for sense in their lives. Just look at how newcomers are hazed and tested when they arrive on their forums and discords and whatnot
Yeah I think this is exactly what it is. The most eloquent charlatans end up roping in people who are lost or looking for meaning.
It's always interesting to think about how much self-awareness the leaders have. Do they drink their own kool-aid, or are they laughing to themselves while pouring the next dose for the others?
You're both correct. All humans have a hardware component; an ideology acts as an attractor. Humans finding connection with other humans like them is one of the strongest pulls for a human mind. It has great power to amplify & grow. This can go in a direction of thriving and growth, and "win-win" with their environment, or it can go in a direction of hate and aggression.
To be honest these seem like crazy people drawn to a movement that would have them and allow them to rise to prominence, rather than a movement creating crazy people.
I think many psychiatrists view split personality disorder as a largely nonexistent condition arising artificially from charlatan psychotherapists. It seems quite possible to me that these people drove themselves mad and each other to suicide with these experiments (where they expected to evoke a split personality they had already begun to define).
Sure this "Unihemispheric sleep" stuff cannot have helped any pre-existing issues, but I think we shouldn't discount the existence of those issues which predate what amounts to cult indoctrination.
The human mind is quite malleable. You can make yourself mentally ill, or be made mentally ill by the people around you. This fact seems to be under-acknowledged because there is a slippery slope to blaming the victims.
But just because you might have gotten yourself into something that doesn't mean you can get yourself out of it.
I'm skeptical of people who think they wouldn't be vulnerable. But even if some are not -- some are, and you can start with the seeds of a little crazy and heal it through grounded thinking and healthy practices, or you can blow it wide open with crazed thinking, casual psychedelic use, and abusive cult practices.
In large enough groups of people there'll always be some crazies. Are there really more of them to come out of LW circles than a comparably large other group?
Every single ethics movement in history ended up spawning utter and complete nope (as well as a lot of useful concepts). See Christianity, Enlightenment liberalism, Marxism and so on. It is almost as if the idea of universal, objective and cognizable good is inherently evil.
- humans find useful concept X
- they describe it with label Y
- it's genuinely useful, it spreads
- It gets too big, Y is misunderstood and corrupted
- New group of humans rediscovers concept X, gives it label Z
This is the story of humanity. The good news is we're kind of (mostly) stumbling through an upward spiral. Current religion would be unrecognizable to people in ancient times. It was never meant to be something frozen in stone. Folklore and things changing as they're retold were a feature, not a bug.
There's a great write-up on this [1], but TL;DR: religion is cultural technology. It succeeded in doing exactly what it tried to do at the time (get people to stop killing each other in tiny tribes and allow mass decentralized human coordination to build civilization & empires where humans could be safe from the elements of nature).
I think it's an apples-and-oranges comparison to lump Christianity (a religion that outright predicts that people will abuse it and protects itself against that) in with liberalism/marxism (philosophies with no such protection that can be mangled into whatever you want). If anything, liberalism and marxism are more like secularized offspring of Christianity, given that they would probably never have developed if it weren't for their founding figures living in a western moral context completely drenched in Christian ideas.
I think I would’ve used Objectivism as a contrasting example. It’s designed around the idea that whatever a “strong” person does to fulfill their goals is inherently good. Objectivists wouldn’t phrase it that way, surely, but that seems the inevitable end result.
I'm not comparing them. What I'm saying is despite all the differences, the end result (dogmatic and self-righteous ideology with zero ability to align beliefs with reality) is eerily similar.
Curious how Christianity protects itself from abuse? From my perspective, Christianity is used to justify incredibly un-Christian activities pretty much constantly.
1. Christianity (like other religions) has built-in protections against false teaching within its own theology, which isn't really true of secular philosophical frameworks.
2. There's a level of outlier visibility going on with a lot of the people who abuse Christianity. Christians who sincerely follow Jesus and walk in obedience don't seek out visibility or self-exaltation; even the ones who evangelize mostly do it on a local scale. Meanwhile, people who abuse Christianity (prosperity gospel preachers, Christian nationalists, etc.) seek out large followings to bolster their power or wealth, making them seem like the "face" of Christianity when they aren't.
To elaborate, the idea is that a Christian (someone who has accepted Jesus as Lord and Savior) will show an outward transformation into someone who is Christ-like and obedient to God. When this doesn't happen at all, and they remain completely un-Christian, you know it isn't genuine (see Matthew 7:15-20). How a Christian is shown by their outward renewal is also touched on in Romans 12:9-21, Galatians 5:16-24, etc. It's not a perfect process, and the renewal is not a prerequisite for salvation but rather an end result of it (Ephesians 2:8-12).
Therefore, Christians have a framework that can be used to identify and rebuke people who distort the teachings of Christianity into something that is in rebellion to God. This doesn't really exist with secular philosophy, which lays out the "ideal" but has no way to prevent itself from being warped.
I think describing them as “the preeminent ethics movement of our times” here is begging the question. Are people outside the group, beyond us observers who periodically talk about them in places like this, even aware they exist? Are there many philosophy grad students studying rationalism for their PhDs?
> how the preeminent ethics movement of our times seems to spawn the most detestable behavior
Excuse me if my sarcasm detector is faulty, but I wouldn't describe the rationality sphere like that. It's a niche group with a fetish for 'intelligence' and a profound distaste for the liberal arts, which translates into a closed ecosystem of blog posts, themes, and jargon, and a lack of reading actual books, where they would see that they're retreading old stuff, but worse. It's a culture of people who believe themselves immune to bias, where calling out obviously malicious behavior is 'not charitable' and all thought from outside the group is suspect. Is it then strange that it produces people who just rationalize (forgive me) their bad impulses?
Most techies stopped learning about stuff not related to computers in high school, so yeah no wonder LessWrong, being more accessible, seems like a better option than reading actual philosophy.
Oh! Of course, but then I think the word OP should've used is 'influential'. 'Preeminent' is more an adjective of quality, I believe, but it might be my ESL showing.
Curious how people who are supposed to be rational have never read up on what anger is (or “badness” and “evilness”, as some people still call it). The best way to do that is to read any recent meta-analysis on the most effective anger treatment: it's cognitive therapy, and it not only works but explains the mechanics, namely misunderstanding, then worry, then the resulting anger (anything enforced on another without consent is anger, even if you think it's good for them). So we actually have a predictive understanding of the mechanics of “good” and “evil”: a person without, or with, anger-management problems. “Evil” is nothing more than misunderstanding, worrying, and protecting yourself (often for reasons you invented yourself after trying to read another's mind, something that's impossible), then forcefully enforcing something upon them. “Good” is nothing more than trying to understand another, not fearing (because you understood them and yourself), and as a result not trying to enforce your will upon them.
A college roommate of mine had a joke: “what’s the difference between Ayn Rand and the tooth fairy? No one believes in the tooth fairy when they grow up.” I feel like it could be updated for Yudkowsky easily enough. I don’t want to waste my time reading all this; as a mid-40s parent living on the East coast, I feel like the appropriate response to any discussion of rationalism is “grow up.”
I understand your sentiment, but would strongly disagree. Even if you really think that rationalism offers its followers nothing of value, it's still enlightening to try to understand what it is about the human psyche that draws people towards it, and how one might go about offering a better version of that.
Why would that effort be enlightening? Many many more people are part of Falun Gong; shouldn’t I prioritize learning more about Falun Gong and what it is about the human psyche that draws people towards it? And why do I need to offer such people a better version?
Given the choice between reading a good book or Harry Potter fanfic from a guy who’s scared of floating point numbers, shouldn’t I read the good book?
Logical conclusions are only as valid as your model. Whenever I see people praising their own logic, I hear "My axioms are so perfect that I'm not capable of questioning them, or even knowing what they are."
Absolutely. These folks always remind me of Robert Brandom's* idea of "formal logical inference" vs. "material logical inference." The former focuses only on the formal structure of an argument, where the latter takes into account context and other variables.**
* I don't know if the idea originated with Brandom.
** Please forgive the crude (partially wrong?) explanation.
It’s reasoning without reality, the same mocked with sayings like “arguing about how many angels can fit on the head of a pin.”
Reasoning must be checked against observation and experiment. It must be checked against reality.
It’s not just that humans are fallible and full of biases and failure modes. It’s that reason itself is only capable of crystalline perfection in domains where all priors can be enumerated and the system is closed, like pure math. Even there we know of mathematical constructs that exhibit phenomena like emergence and computational irreducibility, where the future state can’t be guessed without fully evaluating the function; where “leaps” are provably impossible.
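To make "no leaps" concrete, here's a toy sketch (my own illustration, nothing from the article): an elementary cellular automaton like Rule 30, for which no known shortcut predicts far-future cells; you simply have to run every step.

    # Rule 30 cellular automaton: a standard example of computational
    # irreducibility. To learn the state at step N, you (as far as
    # anyone knows) must compute all N steps.
    def rule30_step(cells):
        n = len(cells)
        # new cell = left XOR (center OR right), with wraparound edges
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # single seed cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

Run it and the familiar chaotic triangle appears; nothing about step 5 lets you skip ahead to step 500.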
The craziest part is some of the longtermism stuff. They are literally writing sci-fi (and cliché sci-fi at that), then reasoning from it as if it were real, and using those conclusions to guide present-day moral and political thinking. Absolute lunacy. We can’t predict the next 50 years, let alone the next 5000.
Sometimes I think one of the evolutionary counter pressures that has likely prevented human IQ from being driven entirely up and to the right is that high IQ increases propensity for delusional thinking. I’ve met quite a few incredibly brilliant idiots. The smarter you are the more elaborate a prison you can construct for your own brain. The major innovation of the discipline of science (and it is a discipline) was to put forward a method to avoid this by taking a step, confirming, and only then taking the next step. It’s pretty simple but it requires restraint even when there’s a shiny thing that looks so “truthy” and cool.
Agreed, but a question occurred to me: what if high IQ people aren't more delusional but more capable of articulating or acting upon their delusions, thus making it more apparent?
The delusions of a normal intellect are probably less interesting, and they're less likely to be a person who garners attention generally. But they may be equally detached from reality.
High IQ in a modern test is about pattern recognition.
I haven't seen it pointed out yet, but oversensitive pattern recognition can lead to illusions.
I.e., if you haven't slept for two days, you can start "seeing" weird stuff. Your brain hallucinates something that isn't there, but because you see something, the brain recognizes a pattern and alerts you before checking it against other patterns and common sense.
So I'd imagine that someone who is very good at pattern recognition could have a very high IQ but not enough erudition to check whether the patterns they see are actually there.
Which is arguably what could have happened in Yudkowsky's case, since he scored very high on IQ but dropped out of middle school.
I think it's mostly that they're more difficult to persuade that their delusions aren't simply them being smarter than the normies around them. That trait isn't unique to high-IQ people, or to people whose choice of reading material and social scene is based around the idea of accessing advanced modes of reasoning, but it's likely more common amongst them.
> The smarter you are the more elaborate a prison you can construct for your own brain.
Amazing one-sentence summary for so much of what is going wrong right now.
Brilliant, rich people trapped inside their own mind palaces, spending vast sums on remaking the world into something that matches their interior delusions.
> Sometimes I think one of the evolutionary counter pressures that has likely prevented human IQ from being driven entirely up and to the right is that high IQ increases propensity for delusional thinking.
I have often had this thought too. Imagine if humans were 100x more intelligent: would we necessarily have survived for 200k-odd years? Society, technology, and morality all evolve at different paces, and if one “outcompetes” the others an imbalance can arise and humanity could be wiped out.
Imagine a caveman intelligent enough to build some type of weapon of mass destruction while humanity doesn't yet have the experience, laws, or power structures to police this person and keep itself safe. And if you consider language and “memetics” to be technology, then you have to consider that beings 100x smarter could also be master manipulators capable of leading the group off the ledge. So many ways civilization could be wiped out.
Really, the only reason we are around today is that society as a whole's ability to implement strategies for recognizing dangerous groups and individuals has outpaced the individual's or small group's ability to wield technology for dangerous ends (well, at least up until now).
I find myself in Cubism and realize I must have taken a wrong turn somewhere, but perspective has collapsed all around me and the way out is blocked by fragments of guitar strings and noses of long-dead Parisian mistresses.
I have vague memories of a scene from the UK comedy show "The Thin Blue Line" where a bunch of bad -isms were followed by counter-examples of good -isms, but despite the episode itself being easy to find (the episode is called "Ism Ism Ism"), I can't easily find the scene within the episode.
> By the same token, the ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
Aka "common sense". It seems like these people make small errors in their base assumptions or reasoning; over time, as these mistakes compound, they can lead to increasingly bizarre philosophies. Common sense might serve as a safeguard against such pitfalls, but it is frowned upon in those circles.
However, if you bracket out all the murders for a second (a big ask), I don't think there's anything particularly special about this cult.
This is just the modern incarnation of something that's been going on for almost a hundred years in California.
In California there are Bohemian social scenes with a mix of high-performing technical people, spiritual/New Age/woo people, and "human potential" secular spirituality and self-improvement.
Magicians and Satanists were involved in the founding of JPL in the 1940s and 1950s (with a cameo from L. Ron Hubbard). EST seminars, Esalen, UFO movements, communal yoga groups. Lesser known is the "General Semantics" movement (really quite similar to the Rationalists) from the 1950s. All these movements mix and overlap.
Mostly these social scenes are weird but harmless. But genuine cults often form in them and then spin off.
The Rationalists themselves are just another incarnation of this. They may seem totally bizarre, but if you dig back into the history of California, they look very familiar. They are part of this stew of self-improvement/spirituality/tech/meditation; in particular, they are the secular self-improvement people. One thing they did was run training seminars where they teach you new thought techniques that will help you be more productive and successful. Many of them also engage in Buddhist meditation practices or take psychedelic drugs for exploration. In that scene, cults form. There have been multiple other rationalist-adjacent cults, although those did not kill people, thank God.
I think it's a mistake to start analyzing the rationalists' beliefs about Bayesianism and expected utility and all that, to try and wonder how a cult could have possibly formed in this social scene. Nothing should be less surprising.
Now that this cult went on to commit multiple murders -- this is surprising and frightening. Perhaps some of the rationalist beliefs have something to do with it, but since there are a small number of other cults that have committed murders, looking at shared factors across all of them, such as charismatic psychopaths as leaders, may be a better choice.
Edit: upon reading to the very very bottom of the article, the author makes the exact same point. I'll leave this up just in case others aren't good readers either.
Yea, I was reading a biography of (JPL founder) Jack Parsons a few weeks ago and was struck by how similar the Zizian patter sounded to a lot of the Golden Dawn/proto-Scientology stuff.
I see a lot of parallels between the rationalists and the Beatniks- both seem to be a social experiment of radically throwing out the existing culture and lifestyle, and trying to think through something new from scratch.
In both cases there are a lot of failed experiments, and disastrous personal consequences for many of the individuals involved, but also a lot of discovery and good ideas.
It's odd. I really like the Moloch post. It's a super interesting story that clearly exemplifies a problem in the real world.
Well-intended people often end up acting against what's best because they only invest in the self, and don't act as a whole.
It's cool philosophy!
But um..
Taking it seriously as a sort of religious demon? You can characterize it that way in stories, myths, theology, etc., but in real life? Nahh..
What the heck man
Go join a labor union or something instead, what the heck
Probably better to look and decide for yourself, but generally they teach specific techniques to reason more effectively in certain situations.
They introduced me to Bayesianism, which I now use extensively in my science career, and I appreciate their specific strategies for avoiding cognitive biases and for arguing more effectively and honestly, such as steelmanning.
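As a concrete example of the kind of update Bayesianism trains you to do (all numbers made up for illustration):

    # Bayes' rule on a toy diagnostic question: how much should one
    # positive result move your belief? (All numbers invented.)
    prior = 0.01           # P(hypothesis)
    p_pos_given_h = 0.95   # P(evidence | hypothesis)
    p_pos_given_not = 0.10 # P(evidence | not hypothesis)

    # law of total probability
    p_pos = p_pos_given_h * prior + p_pos_given_not * (1 - prior)

    # Bayes' rule: P(hypothesis | evidence)
    posterior = p_pos_given_h * prior / p_pos
    print(round(posterior, 3))  # 0.088, much lower than most people guess

The habit of asking "what was my prior, and how strong is this evidence really?" is the genuinely useful part, whatever one thinks of the surrounding subculture.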
None of their lifestyle ideas really jibe with me. I have no interest in things like raising children in a polyamorous group home, or in using consequentialist ethics combined with hand-wavy mathematical models to make decisions in regular life.
I think it may have been inspired by this Norm joke.
Basically pointing out that the criminal act is what's actually the worst part, and not just whatever is most annoying.
Norm MacDonald: Now do you think Cosby's legacy will be hurt?
Jerry Seinfeld (host): Yeah.
Norm MacDonald: You do, huh? I mean, there's a comedian, Patton Oswalt, he told me, "I think the worst part of the Cosby thing was the hypocrisy." And I disagreed.
Jerry Seinfeld (host): You disagreed with that?
Norm MacDonald: Yeah, I thought it was the raping.
Lots of animals are known to do this. Animal researchers wondered how marine mammals were able to sleep: wouldn't you drown? It turns out dolphins sleep with half their brain awake, so they can surface and breathe. The brain hemispheres then take turns.
Certain species of ducks also sleep with half the brain. They get in a circle, with the awake eye facing out and the sleeping eye facing in. (I guess if their optic nerves cross like ours do, it means the hemisphere on the outward-facing side is the one sleeping.)
They tried the “hack” on one person, who apparently had a breakdown and killed themselves. Just a reminder not to take tips from “vegan Sith” who ended up rationalizing stabbing people as “double good” or some wacky shit like that.
The risk of giving yourself "split personalities" should not be underestimated.
I recently had a mild panic attack in which I became convinced that my subconscious was secretly working against me. It was terrifying, and I couldn't see a way to think myself out of it. In fact I thought the very fact that I was worrying about it proved that my subconscious had planted the idea in my conscious experience specifically to hurt me.
What worked was going for a short walk and physically touching the ground with my hands. "Touch grass" actually works sometimes, I think because if the stimuli of the mind are coming from within, then you have no way to override bad ones and you just get echoes of the same negative thing over and over again. Whereas if you can get stimulus of any kind from the external world, then you have something else to pay attention to, you can turn the focus outwards and the feedback loop subsides.
I no longer believe my subconscious is working against me. But I agreed to try to pay attention to its concerns and take them more seriously in the future, so that it has no reason to work against me.
If you regularly struggle with your subconscious working against you, I recommend combat sports or rock climbing.
When there’s a punch coming straight for your face or you look down to see a long fall below you your mind has little choice but to pull itself together.
This is no diss or slam, but if you are concerned about the inner workings of your mind working against you, I highly recommend some additional professional help to peel apart the onion a little bit more.
Concur. Also concur on external stimulus, and I'm pretty sure it works for exactly the reason you identified. I find the intrusive thoughts show up in the evening precisely because there's less stimulus (inside and out; my brain is quieting itself for sleep, giving the nastier, weirder parts of it their chance to say just the stupidest shit "out loud").
A major eye-opener for me was learning, a while ago, that we differ from some other animals in that our limbic system is completely "wrapped" in malleable neurons. Some animals have (as far as we can tell) hard-wired, non-plastic pathways leading from certain stimulus constructs straight into that system: they receive a stimulus, and they respond immediately with physiological, emotional change. Ours is far more deeply tied to the plastic layers of the brain; we have a huge capacity to learn very complex stimulus-response ties between the world around us and how we should set our emotional state.
On the one hand, that's great! It's good to think that we can change our responses to stimuli in a way that, say, penguins can't.
On the other hand... "plastic" doesn't mean "you can will yourself to change them willy-nilly." It's more that we're capable of developing "mental allergies." We are unusual in that we can develop things like PTSD: you were shot at the same time you saw a Volkswagen Beetle drive by? Congratulations, you now experience a visceral survival-focused reaction every time you see one of the most classic examples of German engineering, and you can't will that feeling to stop because your brain's physiology changed under the dim hope that somehow avoiding vaguely-round cars avoids bullets, too.
Our brains are wonderful, transcendental things, but they're also machines and they're machines that can malfunction. It's good we have professionals who study this.
The murders are very sad, but the pseudoscience here has some humor to it.
Like they take the fact that the brain really has two hemispheres, and make some wildly unscientific claim that this means each person has two beings within them that can be shut off at different times (one side may be good and the other bad, one side may be female and the other male).
I honestly don't see how anyone can buy into this stuff unless they are on drugs or malnourished.
What's particularly scary about this cult, compared to others, is how some of its weird ideas permeate to those in power. Tech oligarchs speak of "long-termism", "accelerationism", etc., and use these concepts to undermine worker rights and justify increasing inequality, all under a vague utilitarian vision of a future utopia.
once a person breaks, or is broken, through a specific cognitive barrier, rationalism becomes the easiest form of thinking and living. you could make it a bit harder on yourself, help your genes and synapses get back on track, and let your offspring evolve, but you deliberately decide against it because it's less effort to succumb to the worst version of others than to think about or work on any version of yourself.
math, science, money, the psycho-social level, it doesn't really matter. it's a bit like the difference between people who throw in the towel and accept whatever limits them or their endeavor, and people who engineer something to solve their problem and evolve.
rationalists throw in the towel and accept what is, despite all the evidence that it is NOT so. throw in some statistics and they'll take it (Quillette) as a general truth, emphasizing their understanding of the small sample, of course :D, and then they do a lot of work to make that truth hold. that's why rationalist thinking in science and engineering and business falls short all the time: short of contextual potential and necessity, not under some utopian or dystopian "perfect conditions" but in the real world.
they follow something allegedly perfect because striving for the next better and forever imperfect thing triggers some psychological or linguistic trauma they never put in the work to resolve.
this is commonly abused "top-down" for status, covetousness, and control; it's the main subtextual theme of therapeutic techniques and methods, and it's everywhere in the advertising and entertainment industries, triggering the whole reinforcement cycle as soon as it can get into the next generation of child or teen minds.
it's too poofy, really, as if poofy wanted to have their own concept behind why a different kind of poofy is their real self and requires numerous surgeries and entirely different sets of hormones from somewhere else in the animal kingdom. "it should cost a billion to look that good". wtf.
yeah, sure, there's statistics and algorithms, and linguistics can be twisted and turned either way, but any proper scrutiny of these people and their thinking usually ends in some whiny refusal to face illusions and the mechanics behind those illusions. it works, of course, but it really does not bode all that well.
It's enough to understand that if your narrative is "rational", it's about as rational as that of the Church when it fucked over Jesus and some of the greatest scientists and inventors in history.
Rationality leads to Mafia-style thinking: buying wins and titles in competitions like the soccer World Cup. And while most people don't care because it's just bread and circuses, it should bother rational people more than anyone, because fraud breaks rational links. If A or B are complete fucking bullshit and actually unrelated to each other or to their context, then A + B equals something that is not a rational result; not a result at all, actually.
I always used to think "organic" and "natural" evolution and development were better, but the "AI" industry proved that this applies to artificial evolution and progress as well. And it's all due to game-theory rationality and whiny grown-ups with childhood trauma, or with the fear that their offspring might evolve beyond their methods and that they'd lose their position in the deluded hierarchies for the sake of better ways, better living, better loving, better thinking.
To give even more attack surface and inspiration: if you can't admit symbiosis is possible because you got mobbed, or you can't stand losing with or without sabotage, then you will attempt to break anyone who made symbiosis happen, going so far as to burn down everything that worked and was evolving perfectly fine, or you slowly narrow the field of possibilities to prove some "self-fulfilling" systems effect (over time).
Kurzweil and Yudkowsky and others have created this techno-religion. The meme has real-world power but also limits.
There is no reason to believe in a singularity. There is no reason to believe in general intelligence outside biological drives. Intelligence is a tool for biological organisms to achieve their programmed goals.
Technological progress follows a sigmoid pattern, not an exponential pattern. When new physics or technology is discovered, progress appears exponential for a time, then slows down. This is a common property of biological systems subject to new resource inputs.
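A toy sketch of the point (parameter values entirely made up): a logistic curve is nearly indistinguishable from an exponential early on, then saturates, which is why "we're on an exponential" always looks true right up until it isn't.

    import math

    # Logistic vs. exponential growth, matched at t = 0.
    # L = ceiling, k = growth rate, t0 = inflection point (all invented).
    L, k, t0 = 100.0, 1.0, 10.0

    def logistic(t):
        return L / (1.0 + math.exp(-k * (t - t0)))

    def exponential(t):
        return logistic(0) * math.exp(k * t)

    for t in range(0, 21, 4):
        print(f"t={t:2d}  logistic={logistic(t):8.2f}  exponential={exponential(t):12.2f}")

At t = 4 the two curves agree to within a fraction of a percent; by t = 20 the exponential is off by roughly four orders of magnitude. You can't tell which curve you're on from the early data alone.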
No significant new physics applicable to humans has been discovered since maybe the 1970s or 80s. So we should expect technological progress to slow, not accelerate. And since about 2000, it has.
It is sophomoric to believe intelligence is this general abstract transferable thing or that technology grows exponentially without bound (or reason!)
If people want to develop true artificial intelligence, then they should focus on simulating intelligent whole organisms with lifelike fidelity as a start. Until we can really do that, so-called AI is just half-baked combinatorics and marketing hype that high-school dropouts like Yudkowsky take seriously, or are at least able to wrap up in a narrative that nominally educated Silicon Valley sheep take seriously.
There are reasons, and the proponents aren't shy about stating them.
> Technological progress follows a sigmoid pattern not an exponential pattern.
That's true, but you don't know in advance where it will start to flatten.
> No significant new physics applicable to humans has been discovered since maybe the 1970s or 80s
It really depends on what you call "new physics". There are lots of discoveries related to quantum dots, graphene, high-temperature superconductivity, metamaterials, etc. Those aren't really "new" in the sense that they contradict the Schrödinger equation, but that was also true of giant magnetoresistance, which was crucial for magnetic hard drives. Heck, a new form of magnetism was discovered just last year, first published in December 2024 (see https://www.nature.com/articles/s41586-024-08234-x for example).
Where in nature, outside (supposedly) black holes, do we see singularities? Humans and all human technology are products of nature; why should we expect abstract "intelligence" resulting from human technology to accumulate into a singularity (unregulated positive feedback) instead of being subject to limiting constraints like every other natural phenomenon?
Belief in a singularity is the most irrational thing I have heard of. Supernatural things like the resurrection of Christ seem more likely than the Kurzweil/Yudkowsky singularity.
The idea appeals to people who don't understand what intelligence is. I first heard of it and rejected it in high school more than 20 years ago.
Yudkowsky didn't graduate from high school or go to college, so he spent his time using his verbal skills to convince people he knew what he was talking about, when in fact he is less of an authority in his field than even a moderately educated, intelligent person.
The fact that this stuff continues to appeal to Silicon Valley types just goes to show they are not, in fact, much different from the populist conspiracy believers in flyover country.
It is actually kind of funny to me, especially how these folks call themselves "rationalists..."