As always, topics like this end up becoming a chance for HN commenters to get on soapboxes.
The origins of any movement or school of thought will have many threads. I worked in charity fundraising over 20 years ago, one of the first things I did after getting out of college, and the first organization I am aware of that publicly published charity evaluations was GuideStar, founded in 1994. This kind of evaluation had always happened inside public foundations and government funding agencies, but they tended not to publish the results, or to organize them in a way that any individual donor could query. GuideStar largely collected and published data that was legally required to be public but not easy to collate and query, letting donors see what proportion of a donation went to programs versus overhead and how effective each charity was at producing the outcomes it was designed to produce. GiveWell went beyond that, making explicit attempts to rank impact across possible outcomes and judging some to be more important than others.
As I recall from the time, taking this idea to places like Google and hedge funds came from the observation that rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understandings. Think of Phil Knight almost single-handedly turning the University of Oregon into a national football power, places like the Mozilla Foundation or the New York Met having chairpersons earning 7- or 8-figure salaries, or the ever-popular "give money to get your name on a hospital wing," which usually means giving money to hospitals that already had a lot of money.
Parallel to that, you have people like Singer trying to build a more rationally coherent form of consequentialism, one that doesn't bias the proximate over the distant.
Eventually, LessWrong latches onto it, it merges with the "earn to give" folks, and decades later you end up with SBF and that becomes the public view of EA.
Fair enough and understandable, but it doesn't mean there were never any good ideas there, and even among rich people, whatever you think of them, I'd say Bill and Melinda Gates helped more with their charity than Phil Knight and the Koch brothers.
To me, the basic problem is that people, no matter how otherwise rational they may be, don't deal well with situations where they can grok directionality but can't precisely quantify, and morality involves a lot of that. We also don't do well with incommensurable goods. Saving the life of a starving child is probably almost always better than making more art, but that doesn't reduce to wanting, or being obligated to want, a world with no art. And GiveWell's attempts at measuring impact in dollars clearly don't mean we can just spend $5000 x <number of people who die in an average year> and achieve zero deaths, or even zero deaths from malaria and parasitic worms. These are fuzzy categories involving uncertain value judgments and moving targets, with both diminishing marginal utility and diminishing marginal effectiveness. Likewise, earning to give clearly breaks down if you imagine a world with nothing but hedge fund managers and no nurses. Funding is important, but someone still has to actually do the work, and they're "good" people too, maybe even better.
In any case, I at least feel confident in stating that becoming a deca-billionaire at all costs, including fraud and crime, so you can helicopter cash onto poor people later in life, is not the morally optimal human pursuit. But I don't know what actually is.
> ...rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding.
How do you figure out which causes need the most money (have "more room for funding", in EA terms) or are "really" charitable by most understandings? You need to rank impact across possible outcomes and judge some to be more important than others, which is exactly what GiveWell and the Open Philanthropy Project do.
I'm somewhat confused by the question, because it comes across as a defense of GiveWell, which implies I was attacking it, which was not at all my intent.
But hoping I'm misreading and engaging anyway: "room for funding" varies in its specifics across domains, but it involves some combination of unmet need plus the organizational capacity to meet that need. Try not to get hung up on the object-level examples, because I have no idea whether these are true now or were true in the past, though I think they're close to real examples from roughly 15 years ago, the last time I paid close attention to this. Imagine 50,000 people in some equatorial country living in areas plagued by malaria, 5,000 of whom have bed nets. Some charity exists with the supply chains, manufacturer relationships, and local distributors to easily get nets to 20,000 additional people, but it simply doesn't have the money to buy them. Conversely, imagine pancreatic cancer research is in a state where there may be plenty of fruitful areas not currently being explored, but every person on the planet qualified to conduct such research is 100% booked with their current work for at least the next five years. Then it is more effective to donate to the former than the latter, at least for the next five years, and at least up to the point where the former still has sufficient unmet need and capacity. Again, emphasizing that these are not static conditions.
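The bed-net example reduces to a toy calculation: room for funding is bounded by both unmet need and organizational capacity, whichever is smaller. A minimal sketch, where every figure is hypothetical (the per-net cost especially is an assumed number, not a real price):

```python
# Toy "room for funding" calculation for the hypothetical bed-net example.
# All figures are illustrative, not real data.

NET_COST = 5.0           # assumed cost per net in dollars (hypothetical)
population = 50_000      # people living in malaria-affected areas
already_covered = 5_000  # people who already have nets
capacity = 20_000        # additional people the charity could reach with current logistics

unmet_need = population - already_covered  # 45,000 people still without nets
deliverable = min(unmet_need, capacity)    # bounded by organizational capacity
room_for_funding = deliverable * NET_COST  # dollars the charity can absorb effectively

print(deliverable)       # 20000
print(room_for_funding)  # 100000.0
```

In the pancreatic cancer case, `capacity` is effectively zero for the next five years, so room for funding is zero no matter how large the unmet need, which is the whole point of the comparison.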
As for "really" charitable: as always, it's a judgment call. I assume most people would find poverty assistance and medical aid charitable, but funding college sports not so much, despite both qualifying for tax deductions under US tax law. I can't guarantee everyone will agree, but something like GiveWell is nonetheless premised on the assumption that some outcomes are more morally valuable than others. Curing children of parasites that might kill them or severely impede their mental development is more morally valuable than the civic pride and bragging rights of Oregon alumni and the local fan base.
But at the same time, just as I said above that I don't think we want a world with no art, I also don't think we want a world with no sports. I can't speak for GiveWell, but I certainly don't think the correct amount of money to donate to amateur sports or art museums is zero. In line with the idea's hedge fund origins, instead of all-or-nothing thinking like that, we should think in terms of portfolio allocations: overweight high-QALY early childhood health interventions and underweight adding $10 million to Harvard's $50 billion endowment. Exactly how much? I have no idea. Each person should decide that for themselves, but I still think there's value in raising the topic and starting the discussion.
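The portfolio framing above could be sketched like this. The causes, weights, and budget are entirely made up for illustration; the only real point is that high-impact causes get overweighted while nothing is zeroed out:

```python
# Hypothetical donation "portfolio": overweight high-impact causes,
# underweight low-impact ones, but don't zero anything out entirely.
budget = 1_000.0  # annual giving budget in dollars (example figure)

weights = {
    "childhood health interventions": 0.70,  # overweight high-QALY causes
    "local food bank": 0.20,
    "art museum": 0.05,    # nonzero: we don't want a world with no art
    "amateur sports": 0.05,  # nonzero: or a world with no sports
}

# Weights must sum to 1, like any portfolio allocation.
assert abs(sum(weights.values()) - 1.0) < 1e-9

allocations = {cause: budget * w for cause, w in weights.items()}
for cause, amount in allocations.items():
    print(f"{cause}: ${amount:.2f}")
```

Each donor would plug in their own weights; the structure just makes the trade-offs explicit instead of all-or-nothing.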