Effective Altruism is just a modern iteration of a thing that's been around for a very long time. The fundamental idea is sound. However, in practice, it all too easily devolves into something really terrible. Especially once people start down the path of thinking the needs of today aren't as important as the needs of a hypothetical future population.
Personally, I started "tithing" when my first business was a success. In part because it's good to help the less fortunate, but also as an ethical stance. Having a business drove home that no business can be successful without the support of the community it starts in, so it's only right to share in the rewards.
So, I give 10% back. I have rules about it:
I always give to a local group who directly helps people and who is typically overlooked for charitable giving. I get to know the group pretty well first.
I never give to any group that won't keep my identity a secret.
I never give to any group that asks me for money.
I don't always give in the form of money. Sometimes, it's in the form of my time and effort, or in material goods, etc.
I don't give to "umbrella" groups whose purpose is fundraising for a collection of other groups. This isn't because I have a problem with them, but because they're not the ones who struggle the most to get donations.
>Especially once people start down the path of thinking the needs of today aren't as important as the needs of a hypothetical future population.
It's not that that bothers me so much as the fact that many effective altruists do it so badly. We need to be concerned with the future. That is the only reason to maintain roads and bridges, to prevent pollution, or to conserve resources like water in aquifers and helium. But effective altruists are as likely to talk about colonizing Mars as they are to talk about global warming.
Effective altruism is supposedly about making evidence-based decisions. We have no idea how likely "existential risks" are. We have no idea what, if anything, can be done about them. We cannot predict a year into the future, let alone millennia. So-called longtermism is nothing more than guesswork.
>It's not that that bothers me so much as the fact that many effective altruists do it so badly. [...] But effective altruists are as likely to talk about colonizing Mars as they are to talk about global warming.
Are they doing it badly, or are you not understanding their arguments? AFAIK effective altruists want to colonize Mars on x-risk grounds, which would explain why they want to prioritize that over global warming, even though the latter is happening right now. AFAIK they think that global warming is bad, but isn't an existential risk, whereas colonizing Mars will mitigate many existential risks.
I've yet to see an argument for colonizing Mars for this purpose that wouldn't be a better argument if the goal were instead "build robust, distributed bunkers on Earth, and pay families to live in them part-time so there's always someone there".
Cheaper, and more effective.
Most plausible post-apocalyptic Earths would be far easier to live on than Mars.
The remaining threats that wouldn't also be pretty likely to take out Mars at the same time would be something like a whole-crust-liquefying impact, which we'd have a pretty good chance of spotting well in advance, and we could put some of the savings into getting better at that.
I think a bunch of smart people are also just romantics when it comes to space shit, and that's why they won't shut up about Mars, not because it's actually a good idea.
Hell, building orbital habs is probably a better idea than colonizing Mars, for those purposes, if we must do space shit.
> Most plausible post-apocalyptic Earths would be far easier to live on than Mars.
thank you
how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here
Right—living on Mars is like living on Earth if its ambient surface radiation levels were significantly higher, nothing would grow in the soil anywhere without a ton of preparation, and you couldn't leave your house without a pressure suit. And there's no surface water. And the gravity's fucked up. And the temperatures suck for basically anything life-related. And none of the geological and chemical processes that keep our biosphere viable existed, at all.
So... like Earth if several apocalypses happened at once, including a few nigh-impossible ones. Except it starts that way. And it's actually even worse than that. Sure, maybe we could slam some comets into it and do a ton of other sci-fi shit over a few centuries and it'd eventually get better, sorta, a little—but c'mon, seriously?
> how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here
Because we can afford to make big mistakes in terraforming a dead, ruthlessly hostile planet.
Declaring Mars a “nature reserve” would be completely unenforceable. Suppose you convince US Congress to pass a law banning Americans from sending humans to Mars, due to the risk of contamination to native Martian microbes. What happens when China says “now is our chance to show the world we’ve eclipsed the US by sending humans to Mars when they won’t“? Even though such a Chinese mission isn’t feasible today (an American one arguably isn’t either), who can say what its feasibility will be in another 20 or 50 years? And if not China, then sooner or later somebody else. Sustained global consensus on this issue is unlikely, which makes Mars as a “nature reserve” meaningless in the long-term. On Earth, the vast majority of nature reserves only exist because some government has military control of the territory and hence can enforce that status.
> Declaring Mars a “nature reserve” would be completely unenforceable.
Generally worked pretty well for Antarctica. A few research bases are permitted but colonization is internationally banned and not happening.
BTW, colonizing Antarctica would be a lot easier than colonizing Mars. Far fewer technical challenges to overcome, and much more practical experience overcoming those challenges.
Many people who call for Mars to be declared a "nature reserve" aren't just calling for a ban on Mars colonisation, they are calling on a ban on crewed exploration – either of the planet as a whole, or at least of sites they view as "environmentally sensitive" (which basically turns out to be the most interesting exploration targets, and many of the sites which would most easily host crews). They are worried about microbial contamination, which is a rather different environmental concern from Antarctica, and requires much stricter limits on human activity.
When someone like Elon Musk talks about "colonising" Mars, all he's realistically talking about – at first – is a crewed research station, so not that different from what we have in Antarctica. And many people who want Mars to be a "nature reserve" are opposed to even that. Yes, Musk hopes that such a research station will eventually grow into a buzzing metropolis, but I think if that ever happens it is a long way off. Musk might live to see crewed research stations established, I very much doubt he'll live to see genuine colonisation, much as he enjoys publicly fantasising about that topic.
Even the ban on colonising Antarctica only really works because it is banning something no government wants to do anyway. Crewed exploration of Mars would be attractive in principle to governments because of the benefits for national prestige, getting in the history-books, outshining the competition – the same basic reasons why the US went to the Moon. Of course, that benefit has to be weighed against the immense cost – but costs aren't constant, with further technological and economic developments it is going to become more affordable.
All the groundbreaking exploration opportunities with Antarctica have already been used up, so governments don't have the same motivations there. And I think the first human visit to another planet is going to be much more noteworthy, prestigious, and memorable than whoever was first to explore some big freezing cold island on Earth. A thousand years from now, most people will still probably remember who Neil Armstrong was; I doubt many other people from the 20th century would still be household names (I suppose Einstein and Hitler would be the other likely candidates). It's only been a century or two, but the average person has no clue who the first explorers of Antarctica were.
Fungal spores are probably contamination from whatever probe we sent there. Any life evolving there would not be from any terrestrial evolutionary branch such as fungi.
All this terror about "contaminating" other objects in the solar system with terran life is just misguided. We should all hope to get some terran life established on them.
Heck, we need to build probes full of selected terran extremophiles and spray them into the Martian atmosphere.
1. Any existing life there is, at this point, highly improbable
2. If there is any, how could terran life be competitive with it if the existing life has evolved to match the local environment over billions of years?
3. If there is existing life, how could a biologist not be able to easily distinguish it from terran life?
4. If the life there is ancient and now extinct, terran life isn't going to interfere with that
To answer part 2 with just one example: The native life of Mars, if it still exists, would exist in a state of homeostasis with its environment. It would have to in order to still remain existent. If terrestrial organisms were capable of replicating under martian conditions, they could easily eat everything up and then die off. Never quite getting the time necessary to adapt to the ecological limits of their new habitat. And by this process driving the native life to extinction as well.
To answer part 3: We're still discovering new kingdoms of life on Earth (though it's unlikely we'll discover new domains). If localized panspermia exists within our solar system (from meteor impacts or the like) it's possible martian life and terrestrial life are related enough for the martian life to fit within the already existing family tree of terrestrial life.
https://astronomy.com/news/2021/05/did-life-on-earth-come-fr...
2. They'd never eat all of it. Also, the distribution of either form will never be even across the planet. There will be "islands" of one or the other.
3. Biologists are easily able to determine if they are new kingdoms or not. They're also able to estimate how long ago divergence from a common root happened.
There are many, many examples of parallel evolution in terran biology, but none of them are confused with each other. It's absurdly unlikely that a terran modern amoeba will be confused with a Martian amoeba.
2) Localized, sure; I wasn't arguing about the entire planet. But introduced life could drive the native life to a local extinction. And if it did so fast enough, we would never know the local life had been there.
3) Yes, I know. While this isn't my specialty I work at an organization that does have people that specialize in this. The difficulty would be in definitively concluding whether this is a native divergence that we've just never seen before, or the result of Martian evolution.
2. We've found fossilized remnants of bacteria in rocks, haven't we? There's also ice on Mars. If life existed, we'd find it frozen in the ice.
3. A billion years of evolutionary divergence, with local alien adaptations, is going to be very hard to confuse with anything brought over by a probe.
I personally agree, but was responding to the claim that finding fungal spores would mean we would necessarily turn Mars into a nature reserve and not touch it. I pointed out that a fungal spore wouldn't be Martian but Earthly in origin.
>how are we supposed to build a second home on a dead, ruthlessly hostile planet until we demonstrate ourselves capable of stabilizing the biosphere and building a sustainable long-term civilization here
Because its unique challenges and constraints may make us develop technology that we wouldn't otherwise, which may in turn prove useful back on Earth. Much of our technological advancement comes from military research, where there was no civilian demand. We developed computers so we could break encryption; we developed the internet as a successor to ARPANET, a military network originally built to maintain command and control in the event of a nuclear exchange. Standardized clothes sizing was invented to make uniforms more cheaply. Satellites were invented so we could spy on our military rivals. The entire space program was a spin-off of the ICBM program. Nuclear power came from atomic bomb research. We have developed many advanced prosthetics because injured veterans needed them, and weather prediction thanks to radar built to detect enemy planes.
But what if we could have something less self-destructive than war that would germinate new technologies? That's one of the reasons to colonize space. Space colonization gives us many of the same challenges war does without the need for mass loss of life: needs for new materials, new means of generating power, new modes of transportation. Space exploration will give us new frontiers to strive against rather than finding better ways to murder each other. The ease of Earth doesn't provide those challenges. I can dig up my back yard, throw seeds on the ground, and have vegetables to eat in the fall, but I have learned nothing; find a way to grow food from lunar or Martian regolith and you have just invented a way to rapidly create new soil, solved Earth's soil erosion problems, and found new ways of removing toxins from contaminated environments.
>I can dig up my back yard, throw seeds on the ground, and have vegetables to eat in the fall, but I have learned nothing; find a way to grow food from lunar or Martian regolith and you have just invented a way to rapidly create new soil, solved Earth's soil erosion problems, and found new ways of removing toxins from contaminated environments.
We already have no-till agriculture. The plants terraform the soil for us. The problem with this terraformation is that you need to rotate the plants, because every plant terraforms the soil in a different way. Plants don't deplete the soil unless you replant the same plant over and over again, destroy the terraforming progress through tillage, harvest it, and then never bring the poop back.
Some people will now say that avoiding soil depletion in the short term is a bad idea because it means using a little bit more land (or bring up incorrect numbers).
It goes both ways. Learning to live on a dead world like Mars (or better yet, off-world altogether) will necessarily entail significant improvements in recycling, atmospheric control, and energy management. Those same technologies could be critical to reversing the damage we've done to our homeworld and enabling us to live on it sustainably.
Do you imagine a Mars colony will not have its own politics? Politics is arguably motivated in large part by scarcity. On Mars resources are a lot harder to come by than on Earth.
You could make an argument that a shared struggle against extreme conditions would stabilize societies and make cooperation a necessity, but a Mars colony is going to need a lot of help from Earth to get on its feet, in which case we still need to solve terrestrial politics anyway.
> Do you imagine a Mars colony will not have its own politics
A Mars colony would be small by necessity. Everyone will know everyone else. In such communities "politics" is not nearly so abstract. For instance, you can't convince yourself that climate change is not happening when you personally know the experts in that field taking the measurements and crunching the data. There will always be disagreements but nowhere near the type of nonsense we see here on Earth.
Pretty much. I want to colonize Mars, but not as a solution to apocalypse scenarios. Long-long term it would help, but yeah, it isn't the optimal solution to the problems we're facing now. Still want to colonize it, but just because it would be cool. Whether we take 5 years or 150 years or 1,000 years to do it doesn't bother me. Although doing it in the next 30 years would mean there's a good chance I could see it happen, but that's about it.
One big problem with Mars is that it requires technology to live on. Most of the risks are civilization-ending events, not extinction-level events. When civilization fails, on Earth the survivors bang rocks together; on Mars they die. It turns civilization-ending into extinction.
One big question is whether we can make a Mars civilization that would survive Earth's collapse. It is possible that there are some things, like biological samples or advanced technology, that have to come from the mother planet. Mars will likely take a long time to become self-sufficient, and until then it isn't a backup. The easier self-sufficiency is, the less Mars is needed as a backup.
The final thing is that colonizing Mars could introduce risks. It would involve developing technology that makes disaster more likely, like advanced AI, genetic engineering, or moving asteroids in space. Or it could add a place for conflict, leading to an Earth-Mars war that destroys both planets.
I understand their arguments just fine. I just don't think they make any sense.
Ought implies can. We cannot predict the far future of humanity. We cannot colonize other planets in the foreseeable future. We cannot plan how to handle future technology that we aren't yet sure is even possible.
The things we actually can predict and control, like global warming and natural disasters and pandemics, are handled with regular old public policy. Longtermism, almost by definition, refers to things we can neither predict nor control.
>Are they doing it badly, or are you not understanding their arguments?
Do YOU not understand their arguments? They are facially stupid. The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.
>The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.
Yeah, because that's a strawman you imagined in your head. I'm not sure what gave you the impression that the two were related (other than that they're competing options) based on my previous comment.
> Yeah, because that's a strawman you imagined in your head.
I've read enough of that argument being made in earnest on this forum that I'm going to have to go with the parent poster.
There are many intelligent people who seriously believe in that strawman. (Although it's also possible that they don't actually believe in it, and are just making the argument because the purpose of colonizing Mars isn't increasing the resilience of Earth, but getting away from the hoi polloi. Those people will be in for a rude surprise when they discover that they are also part of the hoi polloi.)
Someone posted it upthread. You can replace any catastrophic event with global warming and it's just as facially stupid. Literally like the thought process of a child. It's completely divorced from reality.
While I don't expect extinction from any particular given cause — and definitely not from any of global warming, nuclear war, loss of biodiversity, peak phosphorus, or the ozone layer — humans have a few massive failure modes:
1. We refuse to believe very bad scenarios until much too late. Doesn't need to be apocalyptic: The Titanic isn't sinking; all of Hiroshima must have gone silent because a telegraph cable was damaged and it can't possibly be the entire city destroyed, and even if it was the Americans can't possibly repeat it; the Cultural Revolution cannot fail; the King can't be executed for treason by his own parliament; the general can't cross the Rubicon; Brutus can't betray me.
I think many of those things would have been dismissed the way you're doing now.
2. Tech is changing. I don't expect extinction from a natural pandemic, but from an artificial one is plausible; not from a natural impact event, but artificial is… not yet, but no harder than creating a Mars colony; propaganda has already started multiple genocide attempts, what happens when two independent campaigns are started at the same time when both groups want to genocide everyone not in their group?
The same risks would still be present on Mars, and the only way I see around the deliberate impact risk is space habitats, which have their own different set of problems (given we can't coordinate on greenhouse gases, I see no chance of us coordinating on Kessler syndrome either in cis-Luna or in Dyson swarm scenarios).
My money is on the quiet failure mode. The demographic collapses we see happening around the world continue and spread as more people have the resources to live individually, without family. Through automation we overcome the economic issues caused by population inversion, leisure is the norm, ambitions are confined to personal goals, and the human species coasts comfortably down to nothing.
I think that direction will rapidly lead to people of the "Quiverfull" attitude (not necessarily literally in the Christian group of that name) becoming dominant.
> what happens when two independent campaigns are started at the same time when both groups want to genocide everyone not in their group?
In the awful real-world history of genocide, I don’t think “we want to genocide everyone except for ourselves” has ever actually happened. Genocide is always targeted against certain groups, with others left alone. I remember someone here saying that “Nazis wanted to kill all minorities”-but that’s historically false, we all know how they sought to exterminate some minorities, what is far less well-known is how they actually promoted and even improved the rights of others, which they saw much more favourably-such as Frisians and Bretons. “Let’s genocide everyone except for ourselves” is the kind of policy which cartoon Nazis would adopt but no one in the real-world ever has. I suppose something genuinely new could happen, but it doesn’t seem particularly likely-far less likely than the sad near-inevitability of future genocides (of the targeted kind with which we are familiar)
> The notion that we should be colonizing mars because of global warming is the stupidest thing I've ever read or heard.
Right, so you don't understand their arguments then, thanks for clearing that up. Global warming is only an additional reason, not the only or main reason.
Presumably because the goal is to survive something bad that happens to Earth. If you're on Mars (and self-sustaining...), that's no big deal. If you're in the Gobi Desert, you're going to be the first people to get wiped out by whatever happens to Earth.
x-risk is existential risk, as in humans get wiped out. Some big ones are meteor impact, nuclear war and disease. The risk of those things ending all of humanity are greatly reduced with a second planet. They're not reduced with a desert colony.
> The risk of those things ending all of humanity are greatly reduced with a second planet.
I can imagine a situation where that's true. But right now, for almost any situation, a series of super-bunkers is orders of magnitude cheaper and more effective. A lot of ridiculously destructive things can happen to Earth and it will still be a better place to live than Mars.
Yeah you can come to different conclusions than colonizing Mars being a good strategy for human survival. I'm just answering the "why not the desert?" question.
Our Earth has had impact events that no bunker would save us from, like the one that created the Moon; even the much smaller impact that created the Borealis Basin on Mars would boil the oceans and melt the surface.
The early solar system was very different, with way more debris, including large planetesimals. The planetesimals caused the big impacts you mentioned, and the smaller stuff caused the impacts we can see on other bodies.
The solar system is a much cleaner place now. All the planetesimals and most of the asteroids have impacted or been kicked out. Big things are in stable orbits. There are a lot of dangerous asteroids, but we track most of the large ones. There is a risk that something big will be kicked out of its orbit, but it is rare enough that we can't say how unlikely.
Large impacts, from objects 5 km or bigger, happen roughly every 20 million years.
It would be massively cheaper and faster to robotically colonize near-Earth space and get really, really good at killer asteroid detection and redirection.
Ok, but redirection capability must be abundant enough that outright sabotage and terrorism can be countered by another nation, or else you will have an increase in extinction risk.
None of those things would make Earth less hospitable than Mars. A desert colony would still be better off than trying to survive on Mars, particularly once Earth's resources are cut off. Mars is far more hostile than anything likely to happen to Earth over the next hundred million years.
It's not about hospitable. It's about survivable. There are large enough meteor strikes where you'd be better off on a self-sustaining Mars colony than anywhere on Earth.
It’s not about Mars being lower risk but independent risk. Someone could decide to keep copies of important documents in their vacation home not because it’s less likely to have a fire, but because it’s less likely for both houses to have a fire.
I used the wrong word when I said meteor. They’re too small. A comet or asteroid of 100km diameter would raise the temperature of the surface of the Earth by hundreds of degrees and then there’d be decades of darkness. https://www.sciencedirect.com/science/article/pii/S001632872...
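For a rough sense of scale, here is a back-of-envelope sketch of that claim; the impactor density and encounter velocity are assumed round numbers, not figures from the linked paper:

    # Back-of-envelope: kinetic energy of a ~100 km rocky impactor
    # versus the energy needed to heat the whole ocean (assumed values).
    import math

    radius_m = 50_000      # 100 km diameter body
    density = 3000         # kg/m^3, typical rock (assumed)
    velocity = 20_000      # m/s, typical encounter speed (assumed)

    mass = density * (4 / 3) * math.pi * radius_m**3   # ~1.6e18 kg
    energy = 0.5 * mass * velocity**2                  # ~3e26 J

    ocean_mass = 1.4e21    # kg of seawater, roughly
    c_water = 4186         # J/(kg*K), specific heat of water
    delta_t = energy / (ocean_mass * c_water)          # ~50 K

    print(f"impact energy ~{energy:.1e} J, "
          f"enough to heat the oceans by ~{delta_t:.0f} K")

Even if only part of that energy couples into the oceans and atmosphere, it's in the right ballpark for the kind of global heating and prolonged darkness described above.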
I would wager that if we get a established research colony on Mars, we're 100 years away from a mostly self-sufficient small city.
We're not going to have Mars colonized in a decade or two, but it's not going to take a thousand years, either. Probably. I'd say thriving colonies within a century or two.
> We have no idea how likely "existential risks" are.
We absolutely do. We have quantified the risks of supervolcanoes, asteroid impacts and coronal mass ejections. We continue to quantify the ongoing risk of climate change and nuclear war (how many minutes to midnight?). The real open questions are the likelihood of a biological agent (natural or engineered), and AI.
The fact that we don't know the risks should make you more worried about those. Unknown unknowns can bite you in the ass hard with no warning. Maybe we should figure out some warning signs.
> We have no idea what, if anything, can be done about them.
That's what research is for. Sounds like maybe we should fund some research into these issues.
Yep. But if you only need to solve problems in what’s effectively your own science fiction novel, you’ll never fail and don’t have to face the very annoying practical problems that keep stuff from getting done in the real world.
Which is why everyone has a fantastic zombie apocalypse plan but no realistic ideas to address their local non-zombie violent crime.
> It's not that that bothers me so much as the fact that many effective altruists do it so badly.
I feel the same about Rationalists and rationality. They even had an excellent approach with their motto: "We're only aspiring rationalists", but when you remind them of that motto in the process of them being not actually rational, it has no effect.
There's got to be a way to ~solve something that is so in your face, like right there in an argument, the very essence, but it is a very tricky phenomenon, it always finds a way to slip out of any corner you back it into.
Oh, I agree! I didn't mean to imply that being concerned with the future isn't critically important. It is. I like how you put it better -- it's that they do it so badly.
Far be it from me to second-guess anybody's giving (motes and beams and all that) but this rules out many of the most effective aid organizations, all of which are absolutely off-the-charts obnoxious about fundraising --- because it works.
> many of the most effective aid organizations, all of which are absolutely off-the-charts obnoxious about fundraising
This doesn't seem to jive much with what's reported by charity evaluators like GiveWell, or with what kinds of charitable organizations get grants from more traditional but still high-impact philanthropies like the B&MGF.
It's quite plausible that too much emphasis on fund raising among the general public distorts incentives within these charities and makes them less likely to be highly effective on average. If so, we're better off when the job of publicly raising charitable donations is spun off to separate organizations, such as GiveWell or more generally the EA movement itself.
Fundraising expenses are a huge problem with large charities, but it doesn't follow that fundraising annoyingness is a huge problem. It's not a customer service problem with donors; it's a "using too much of proceeds on fundraising" problem.
If an organization believes spending a marginal dollar of money on their programs is the best way to improve the world, then spending $10 to get $11 in donations allows them to spend an extra dollar on it. It's rational and even morally required. (The only potential negative being the extent that winning a contribution crowds out funding from other causes.)
More generally, people overly emphasize low administrative expenses as a sign of quality. You need overhead to effectively administrate and evaluate programs.
I don't want to get tangled up in abstractions here. To a decent first approximation, every large charity well-reviewed by Charity Navigator (or the like) fundraises from past donors aggressively. It would be a red flag if they weren't annoying previous donors. Empirically, the idea of "never giving money to organizations that ask for money" is likely to steer you away from the most effective aid organizations.
Jibe. Yes, words change and evolve, but I only mention it because "to jive" has another meaning: to BS somebody.
I agree with your overall point that "I won't give to groups who tell me they need money" is a pretty high bar to set. However, GP's comment is in keeping with something I've come to think, which is that organizations will re-form themselves around your donations (I give large amounts because I can afford them) and they'll befriend you, and it becomes a difficult situation to extricate yourself from. I tend to do one-time gifts and then move on.
Or gibe. The problem is jibe has negative connotations, whereas 'to jive with' seems to me to be a metaphor that works (I assume it's used in the dancing sense?).
I don't know if the meaning of the phrase has changed somewhat; 'to jibe with' suggests a sarcastic undertone to me, but in modern usage no sarcasm is intended, so maybe jive is the correct term for the current usage of 'to jX with'.
It looks interesting: "jibe" in the pejorative sense is both an (understandable) alternate spelling of "gibe", and also probably shares a root with "gibe" --- both probably stem from a word that means "rough handling", "kick", or "rear up".
True, that's the point. It's a necessary self-protective stance. Before I adopted that rule (and the secrecy rule), giving money resulted in me being hounded incessantly for money from every other group under the sun. I think I got it worse than many because I tend to give large amounts. It was a nightmare.
Another writer I like, discussing exactly this problem, referred to it as "the quantum unit of sacrifice". It is annoying. Very annoying! But like, that's all it is.
I distinctly remember signing up to give £5 a month to a charity on the condition that if they ever contacted me again asking me for more money I'd immediately cancel my Direct Debit. They didn't even get their second payment.
Honestly, the £5 a month you were giving them is probably less than the cost of special casing you in their largely automated donor marketing systems so this is a net win to both of you.
And yet it's worth it for them to have people canvassing door-to-door for those £5 a month donations with a <10% conversion rate. I guess the difference is that those people don't get paid, and the administrators do.
EDIT: Unless they literally don't care about the base rate donators at all. They only exist so that some of them will get converted into higher rate donators later.
I suppose the argument could be that the charities who aren't savvy enough to play ball are the ones who could use the attention. Small, hyperlocal charities might not have the resources for a dedicated marketer in the first place, and even if they're less "effective" from a global optimization perspective, most people probably get greater utility from donating to local causes.
This is precisely my reasoning. The larger charities don't need me. Also, this is a way, in large part, to give back to my local community -- the people who supported (and support) my business efforts.
There are many ways to measure impact. I choose to measure it locally.
> I always give to a local group who directly helps people and who is typically overlooked for charitable giving. I get to know the group pretty well first.
So maybe they are specifically looking for grassroots organisations that do good work but are less able to fundraise.
Not a problem. There are lots of trees in the forest. Those folks can go their merry way, while some of the rest of us have different goals, personal as well as noble.
And who's to say 'most effective'? I used to think the Red Cross was one of those, until they got caught driving around randomly during Katrina (was it?), doing nothing but spreading their brand. Or so I recall. If I got that wrong I apologize, but my point is that bigger means less transparent. For instance, those big effective organizations are spending a butt-ton on fundraising.
Nobody who has paid attention to aid organizations over the last 2 decades believed the Red Cross was effective, just for what it's worth. The Red Cross is practically the entire motivation for sites like Charity Navigator.
> Effective Altruism is just a modern iteration of a thing that's been around for a very long time.
Which you think is what, exactly? I'm under the impression that thing is warmed-over utilitarianism.
> The fundamental idea is sound.
I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However the framework appeals very strongly to "rationalist" type people.
It's a group of people persuading themselves they're special and entitled, because of course they are, and then trying to sell that line - financially, psychologically, sometimes politically - to themselves and others.
Which is not a new thing in any way at all. The wrapping changes, the psychological games don't.
I have a rule of thumb which is that if you want to understand a movement, organisation, team, social or personal relationship, or any other grouping, the messaging and the stated purpose are largely irrelevant. The real tell is the quality of the relationships - internally, and with outsiders.
If there's a lot of entitlement and rhetorical myth-making + grandiosity happening expect some serious dysfunction, and very likely non-congruence and hypocrisy.
>I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.
Right, but that doesn't mean that we shouldn't care about consequences at all. There's a pretty big gap between "given that we have scarce resources, we should maximize the impact of our use of it" and "committing this atrocity is fine because the utility calculations work out".
The reason EA seems like it's a form of utilitarianism is of course the association with the so-called rationalist community. As you note, it's very appealing to that type of person. This is partly because the math used to rigorously compare consequences seems easy, and partly because utilitarianism has a lot of good places to hide your subjective value judgements.
You can apply EA-like concepts with any sort of consequentialist ethics. E.g. the Rawlsian veil of ignorance can work -- would hypothetical-me rather reduce his chance of dying of malaria by X%, or reduce his chance of malnutrition by Y%? It's just harder to explain why you rank one course of action over another, and therefore you're probably not going to be able to centralize the decision making.
This isn't because it's somehow unsound[0]. It's because it's harder (though not impossible) to explain with math, and the subjective value judgements are right in your face rather than hidden in concepts like utility functions.
[0]- It might be; you might not accept the premise of the veil of ignorance. That's not the reason it seems trickier than the utilitarian version, which has the same problem.
The "10% of lifetime income to charity" pledge is pretty close to Christian tithing, Islamic zakat, and suchlike. Who also claim to be spending donations to help the poorest people in society, and with low waste.
Of course, EA has a bunch of other weird stuff like AI safety, which isn't an idea that's been around for millennia.
When you make that juxtaposition, the idea that you must obey ridiculous rules in order to placate an invisible omnipotent being does seem to have religious analogs.
> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things.
I think most moral philosophers agree with you on that.
IMO, the way people talk about Utilitarianism feels to me exactly the same as my own feelings when at school, having only been taught 2D vectors and basic trig functions, I spent 6 months trying to figure out what it even meant to render in 3D — in particular the sense of pride when I arrived at two useful frameworks (which I later learned were unshaded 3D projection and ray-marching).
By analogy: while that's a good start, there's a long way from that to even Doom/Marathon let alone modern rendering; similarly while Utilitarianism is a good start, it's only saying "good and bad can be quantified and added" and falls over very quickly when you involve even moderately sized groups of spherical-cow-in-a-vacuum people with finite capacity for experiencing utils. It also very definitely can't tell you what you should value, because of the is/ought divide.
Once your model for people is complex enough to admit that some of them are into BDSM, I don't think the original model of simple pluses and minuses can even ascribe a particular util level any more.
Utility can't be one dimensional and it probably isn't linear.
In other words, we would have to treat most situations as unique and then try to find patterns and then the whole rationalism thing goes out of the window.
Only spherical-rationalism-in-a-vacuum goes out the window.
Unfortunately, and this goes back to my previous point, lots of people (I'm not immune!) mistake taking the first baby step for climbing the whole mountain.
> I do not believe utilitarianism is sound, because its logic can be easily used to justify some obviously horrible things. However the framework appeals very strongly to "rationalist" type people.
If it sounds horrible, then it probably is?
The logical chain of hurting a human leading to helping two humans doesn't sound like something that is moral or dependable.
Giving to charities that focus on the most severe and urgent problem of humanity is a very straightforward idea.
However, not all charities are focused on the most urgent problems. For example, a local charity I frequent does improv and comedy theater, hardly 'urgent' needs. What people don't like to hear is that they could donate money to a third-world NGO providing vaccines, or fighting corruption in third-world countries, instead of their local church or community theater.
Don't get me wrong, community theaters/churches/etc are good things. They just aren't saving lives.
Every ethical theory is a mess of contradictions but throwing ethics out entirely isn’t the right answer. As for hurting someone to help others, sometimes someone needs to kill someone like Hitler for the greater good.
On the subject of charities being annoying to you, there's a simple answer:
Start a donor-advised fund (DAF) with your financial institution. Most have one, and some large charities (e.g. Silicon Valley Charitable Foundation) have their own.
You fund it with cash or appreciated stock. For the latter, you can take a tax deduction that year for the entire market value.
You can't get the money back (that's why it was deductible). You can "advise" that they make a grant to some 501(c)(3) charity, and from my experience they always do, after due diligence.
Here's why they're not annoying: you can give anonymously if you want. They can't bother you because they don't know who you are.
The one I have allows you to name a "successor trustee" who takes over as advisor if you're gone.
> I never give to any group that asks me for money.
I get how this would keep you personally from being annoyed, but it seems to incentivize worse outcomes. "Let's collect all the money we can, we never know if we'll get more. Let's grow that reserve" vs. "In a bad month we can get our usual donors/JohnFen to give us his annual donation a little early".
Perhaps, although I haven't seen that. But I also have to deal with the limits of what I can tolerate. That rule makes it possible for me to give money. Without it, I wouldn't. I experienced what it was like without that protection, and it's beyond what I could put up with.
It's possible that the rule both is required for you to give and forces charities into suboptimal strategies. It makes sense to protect your sanity, but not if your intent was to filter charities by need/administration.
I think this is a very, very, very important step. What's more, it's a step that I don't think can be outsourced to someone else, which is why I'm skeptical about claims by, among others, the Effective Altruism movement, to be able to do this kind of thing on your behalf.
>What's more, it's a step that I don't think can be outsourced to someone else, which is why I'm skeptical about claims by, among others, the Effective Altruism movement, to be able to do this kind of thing on your behalf.
Why can't this be done? Society in general outsources due diligence to third parties all the time. Banks outsource credit worthiness assessments to credit bureaus. Passive investors outsource price discovery to other market participants. Online shoppers outsource quality control to reviewers. I agree that there's no substitute for doing it yourself, but it's simply not realistic in many cases to do the due diligence yourself. Even if you do it yourself, there's no guarantee that you'll do a better job than the professionals.
It sounds like a nice idea, but I don't think it's a very practical one.
It drives away anyone who wants to give but is unable or unwilling to devote the kind of time necessary to do this kind of in-depth research (the amount of which only goes up the more money you want to give away).
The problem with "sacrifice the present for the long term" thinking over ridiculous time scales, beyond 100 years, is the concept of diminishing returns.
Anything around 3% or more of endless annual growth must invent faster-than-light travel within less than a millennium and then exclusively use it to colonize multiple galaxies.
What this tells you is that we are going to drown in capital within the next two thousand years and probably launch a few space missions toward neighboring solar systems but all of those things are closer to 0% annual growth than 3%.
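As a minimal sketch of the arithmetic behind that claim (the 3% rate is the one mentioned above; nothing else is assumed):

    # Sustained 3% annual growth, compounded over long horizons
    for years in (100, 500, 1000):
        factor = 1.03 ** years
        print(f"{years:>4} years: roughly {factor:,.0f}x today's output")
    # ~19x after a century, ~2.6 million x after 500 years,
    # ~6.9 trillion x after a millennium -- which is why endless 3% growth
    # implies needing whole new galaxies' worth of resources within
    # roughly a thousand years.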
> Effective Altruism is just a modern iteration of a thing that's been around for a very long time. The fundamental idea is sound.
It's largely a reinvention of a philosophical program called logical positivism, which the philosophers gave up on because it didn't work, so it's not sound. They just brought it back because it's the most STEMy kind of philosophy and they think anything with enough math must be right.
(Example of them not reading the literature is their invented term "steelman", which academia already had and called "reconstructing arguments".)
Effective Altruism is not a reinvention of logical positivism.
Perhaps you mean rationalism (in the internet-rationalist sense; there are others) which is closely associated with EA but not at all the same thing, and is somewhat like logical positivism.
I will believe people claiming these are different things when the EAs stop talking about AI ending the world and other things they read in SF novels and go back to malaria nets.
GiveWell's recommended charities are all doing health interventions in poor countries. The largest category in Good Ventures's grants, by more than 2x, is "global health and development". Malaria nets and the like are, in fact, most of what EAs are doing.
Those things are less controversial and exciting to talk about than (say) whether there's a real danger of AI wrecking the world, so if you focus on what's talked about in the EA Forum or something then you will get a misleading impression.
Yes, it's definitely utilitarianism. William MacAskill (co-founder of the Centre for Effective Altruism), for example, wrote Doing Good Better: Effective Altruism and a Radical Way to Make a Difference and is also a co-author of https://www.utilitarianism.net/ (an introductory online textbook on utilitarianism).