
but EA is at least doing something about a huge problem we will face.


What are they actually doing? Honest question, because I don't know.


Here are some EA highlights:

* Founding and scaling GiveWell, synthesizing and analyzing global poverty research to make recommendations. This has helped guide over $1B in really valuable work, the largest portion going to anti-malarial bednets. https://www.givewell.org/about/impact

* Convincing Dustin Moskovitz and Cari Tuna to allocate their giving along these lines, building Open Philanthropy, which has made a lot of grants that look really valuable https://www.openphilanthropy.org/grants (this is in addition to the grants made at GiveWell's recommendation -- don't want to double count).

* Michael Kremer won a Nobel prize in econ (shared with Esther Duflo and Abhijit Banerjee who I'd describe as EA-sympathetic but probably don't consider themselves EAs?) for global poverty research.

* Built an "AI Safety" subfield of computer science, with the 2016 paper "Concrete problems in AI safety" having 1.8k citations.

* Founding and scaling 80,000 Hours, helping people figure out how to use their careers to do more good. This includes looking into potential career opportunities, writing up public advice, and 1:1 advising, and 1k+ people have cited them as part of why they changed what they were working on. https://80000hours.org

(Not especially representative; there's been thousands of people working along these lines for years and I'm sure I'm missing a lot of valuable work.)


Purchasing large mansions[0], chateaus[1], and spending lavishly on administrative kickbacks[2] because longtermism can justify any present action if you extend your horizon far enough. EA is effectively two disconnected ideas at this point.

[0] https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...

[1] https://www.forbes.com/sites/sarahemerson/2023/02/28/effecti...

[2]https://intelligence.org/wp-content/uploads/2022/12/Independ...


> spending lavishly on administrative kickbacks

I just skimmed the report you link and nothing jumped out at me as matching this -- could you be more specific?


we _will_ face? Are we forecasting AGI as a given?


Personally, I'm forecasting mass human disassociation/suffering as a result of pursuing AGI as a given. :)



