{{philosophy}}
{{cquote|When I give food to the poor, they call me a saint. When I ask why they are poor, they call me a [[communist]].|||{{wpl|Hélder Câmara}}}}
'''Effective altruism''' is a movement to change the world through making carefully-targeted charitable donations &mdash; not ''only'' through making carefully-targeted charitable donations, but that is the overwhelming focus. Philosopher [[Peter Singer]] started the idea and buys into it big time, pushing it hard wherever he goes.<ref>http://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism.html</ref> Effective Altruism is also pushed by [[San Francisco|Bay Area]] [[libertarianism#"Techno-libertarians"|technolibertarians]], and artificial intelligence [[existential risk]] groups, including {{wpl|Machine_Intelligence_Research_Institute|MIRI}}. The latter, of course, consider themselves an obvious beneficiary &mdash; if not ''the'' obvious beneficiary.
  
The sales pitch is that, if you're going to try to make the world a better place for other people, you should try to do the best possible job you can. If you had the choice between helping a local community theater group put on a show or [[Think of the children|saving African children from malaria]], the right thing to do is, of course, to save the children. ([[Overpopulation|We think.]]) People face dilemmas like this in real life whenever they donate money to charity: if you're not donating to the most cost-effective charities that you can, you fail at [[utilitarianism]]. (It's impossible '''not''' to fail at utilitarianism, but you can fail less hard.)
  
It is important to remember that EA invented neither the concept of charity, nor the concept of evaluating charities  &mdash; though some EAs behave as though they did. Beware of EAs equivocating by responding to criticisms of the EA subculture's behaviours with advocacy of the value of charity or evaluating charities in general.
  
== How EAs evaluate charities ==
In the ideal case, EAs leave the actual evaluation of charities to dedicated organisations set up for that purpose: charity evaluators such as GiveWell and Giving What We Can (GWWC). GiveWell and GWWC tend to rate charities in a quasi-utilitarian way, combining the best available published evidence for the interventions with lots of questions put to the charities they rate: checking that the interventions actually work (auditing), whether there is room for more funding, and whether adding more funding would do the same amount of good, more good, or less good. Overhead is also considered; however, overhead is not regarded as a terrible thing if it improves the effectiveness of the work (monitoring programs being a notable example). EAs prefer these evaluators to older ones such as Charity Navigator, which just looks at the percentage a charity spends on administrative and fundraising overheads and pays no attention to whether what the charity is doing is effective, or how effective it is.
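To make the quasi-utilitarian arithmetic concrete, here is a minimal sketch of the sort of comparison described above, written in Python. The charities, cost-per-QALY figures and funding caps are entirely made up for illustration; this is the general idea, not GiveWell's actual numbers or methodology.
<syntaxhighlight lang="python">
# A toy "cost-effectiveness plus room for more funding" allocation.
# Every figure below is invented for illustration.
from dataclasses import dataclass

@dataclass
class CharityEstimate:
    name: str
    cost_per_qaly: float     # estimated dollars per quality-adjusted life year
    room_for_funding: float  # dollars the charity can still usefully absorb

def allocate(budget: float, charities: list[CharityEstimate]) -> dict[str, float]:
    """Send money to the lowest cost-per-QALY charity until its room for
    more funding is exhausted, then move on to the next-best one."""
    allocation: dict[str, float] = {}
    for charity in sorted(charities, key=lambda c: c.cost_per_qaly):
        if budget <= 0:
            break
        grant = min(budget, charity.room_for_funding)
        allocation[charity.name] = grant
        budget -= grant
    return allocation

charities = [
    CharityEstimate("Bed nets", cost_per_qaly=100, room_for_funding=5_000),
    CharityEstimate("Deworming", cost_per_qaly=150, room_for_funding=2_000),
    CharityEstimate("Community theatre", cost_per_qaly=10_000, room_for_funding=50_000),
]
print(allocate(6_000, charities))
# {'Bed nets': 5000, 'Deworming': 1000} -- the theatre gets nothing.
</syntaxhighlight>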
  
Of course, then there's donations to MIRI, but MIRI appear to be special-cased for subcultural reasons.
  
The trouble is that EA is a mechanism to push the libertarian idea that charity is a replacement for government action or funding. Individual charity has nothing like the funding or effectiveness of concerted government action &mdash; but EA sustains the myth that individual charity is ''the most effective way to help the world''. EA proponents will frequently be seen excusing their choice to work completely fucking evil jobs because they're so charitable, and disparaging the foolish people who actually work on the ground at the charity for their ineffectiveness compared to the power of the donors.<ref name=replaceability>Todd, Benjamin. [http://www.academia.edu/1807196/Which_Ethical_Careers_Make_a_Difference_The_Replaceability_Issue_in_the_Ethics_of_Career_Choice Which ethical careers make a difference? The replaceability issue in the ethics of career choice] (Master's thesis)</ref>
+
===GiveWell===
However, GiveWell has partnered with billionaire Facebook co-founder Dustin Moskovitz and his wife's charitable foundation in a joint initiative called the Open Philanthropy Project, and in this initiative they have been accused of casting aside their analytical rigour in favour of recommending, in some cases, politically liberal charities (presumably) already favoured by the Moskovitzes.
  
GiveWell has also recommended that people spam the Against Malaria Foundation (AMF) with all<ref>except if they are billionaires, obviously</ref> the money they have set aside to donate, on the grounds that they think it's the best charity, even at the risk of exhausting the AMF's room for more funding, amongst other dubious decisions.
  
Effective altruists have criticized GiveWell for being too strict in its criteria, which leaves UNICEF out of its list of recommended charities because UNICEF works on so many different interventions that GiveWell finds it harder to evaluate their effectiveness.<ref>[https://forum.effectivealtruism.org/posts/pNFp7PsWLHYWCqW9s/where-i-m-giving-and-why-eric-friedman Where I'm giving and why: Eric Friedman] ''Effective Altruism Forum''</ref> This is despite the fact that UNICEF engages in many cost-effective interventions such as providing vaccines.
  
== Origins ==
The philosophical underpinnings mostly come from philosopher Peter Singer, particularly his 1972 essay {{wpl|Famine, Affluence, and Morality|''Famine, Affluence, and Morality''}}. He argues in this essay that affluent people are morally obligated to donate far more of their income to humanitarian causes than is considered normal in Western culture. This did not start the effective altruism subculture, but once it was going he joined in enthusiastically.
The effective altruism subculture &mdash; as opposed to the concept of altruism that is effective &mdash; originated around [[LessWrong]].<ref name="effective-altruism.com history"/> The earliest known use of the term was in the form "effectively altruistic" by user "Anand" in a 2003 edit on the wiki of the [[singularity|singularitarian]] Shock Level 4 mailing list, a predecessor of LessWrong run by [[Eliezer Yudkowsky]].<ref>http://sl4.org/wiki/action=history&id=EffectiveAltruism ([https://archive.is/Af8Um archive])</ref> Anand's article argued that donating to the Singularity Institute (now known as MIRI) is more effective than donating to prevent the spread of HIV/AIDS, even though the latter may be more emotionally compelling. Later, the term was used in the form "effective altruist" by Yudkowsky himself in his 2007 blog post ''Scope Insensitivity'', arguing against sentimentality and for [[utilitarian]] calculation in charity:<ref>http://lesswrong.com/lw/hw/scope_insensitivity/</ref>
{{quotebox| If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.}}
Other names were used, ''e.g.'' "efficient charity" in 2010,<ref>https://www.lesswrong.com/posts/FCxHgPsDScx4C3H8n/efficient-charity</ref> but the movement eventually settled on the name "effective altruism" by 2012.<ref>http://www.jefftk.com/p/a-name-for-a-movement</ref><ref name="effective-altruism.com history">http://effective-altruism.com/ea/5w/the_history_of_the_term_effective_altruism/</ref>
== Earning to give ==
People that call themselves effective altruists commonly endorse the "earning to give" approach, at least for those who have, or might be able to get, well-paid jobs. At its most hardcore, "earning to give" means getting the highest-paying job you can and then donating as much of it as possible (up to some threshold, for sanity's sake). After all, you can get more done by paying a bunch of other people to solve problems for you than you can do all on your own, right?<ref>http://80000hours.org/earning-to-give</ref>
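As a toy version of the replaceability arithmetic behind "earning to give" (the salaries and donation rate below are invented for the example, not figures from 80,000 Hours):
<syntaxhighlight lang="python">
# Toy "earning to give" arithmetic; every number is invented for illustration.

def workers_funded(high_salary: float, donation_rate: float, ngo_salary: float) -> float:
    """How many charity-sector salaries one high earner's donations could cover."""
    return (high_salary * donation_rate) / ngo_salary

# A hypothetical $300k finance job, donating a third of gross pay,
# could pay for two $50k charity workers...
print(workers_funded(high_salary=300_000, donation_rate=1 / 3, ngo_salary=50_000))  # 2.0
# ...which is the argument. Critics note it says nothing about what the
# finance job itself does to the world in the meantime.
</syntaxhighlight>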
In practice, people will not always take (or keep) the highest-paying job they can, for a variety of reasons including commute time, company culture, working hours, the employer's attitude to diversity, work-related stress, and whether the management are perceived to treat employees well or badly. However, 80,000 Hours, an organisation dedicated to giving career advice to wannabe effective altruists, published a blog post claiming that research showed that, depending on the type of stress, stress at work wasn't necessarily a big deal anyway and in some cases, people should consider just sucking it up and maybe "reframe stress as opportunity", in the interests of saving more children from malaria.<ref name='80,000-stress'>[https://80000hours.org/2016/02/should-you-look-for-a-low-stress-job/]</ref>
Also, in practice nobody ''literally'' donates "as much as possible," an unrealistic standard which would presumably mean forgoing any kind of luxuries and accepting a curtailed social life, at least after securing a long-term relationship &mdash; and which would still leave the awkward question of whether one's kids should be brought up in near-poverty. (The powerful human instinct towards protection of one's offspring would tend to militate against such thinking when it came down to it &mdash; and if not, there's always [[Child_Protective_Services|social services]].) One EA organisation, Giving What We Can, promotes a suggested amount of 10% of one's income, given over one's entire working lifetime. Although this is easily achievable by generously-compensated Bay Area software engineers, and (as even Giving What We Can recognises) not achievable by students struggling to get by on student loans, some in the movement seem curiously blind to the fact that not everyone who has a job might be able to part with 10% of their entire income. Some &mdash; not all of them millionaires &mdash; even pledge to give ''much more'' than 10% of their income. It is unclear whether this behaviour is, on balance, inspirational, or whether it acts to drive away potential donors, activists and charity workers who might feel that this is a movement of exclusively privileged people that is remote from their lives and concerns.
Compounding the problem, effective altruism is regularly conflated, even inside the movement, with:
* Giving What We Can, even though not all people who identify as "effective altruists" have pledged to donate 10% of their income or are planning to do so
* [[Utilitarianism]], even though not all effective altruists are utilitarians
* Supporting everything that everyone in the movement does, even though that would be arguably self-contradictory (see below)
EA organisations regularly conduct research into what brings people into the EA movement, but no formal research seems to have been done into what ''drives some people away from EA''. The thinking of many EAs is that effective altruism is so ''obviously right'', only people who were somehow in fundamental disagreement with EA values like doing nice things, and doing more and better things rather than fewer and worse things, would even consider not joining the movement...
==Mosquito nets versus AI risk==
The ideas have been around a while, but the current subculture that calls itself Effective Altruism got a big push from {{wpl|Machine_Intelligence_Research_Institute|MIRI}} and its friends in the [[LessWrong]] community, many of whom considered MIRI ''obviously'' the most effective charity in the world.<ref>They claim <capture>[http://lesswrong.com/lw/6w3/the_125000_summer_singularity_challenge/4krk "8 lives saved per dollar donated".]</capture></ref> However, unfortunately for MIRI, EA charity guide GiveWell subsequently rated donations to it as actually ''worse'' for its own cause (addressing the [[Cybernetic revolt|threat to humanity]] posed by hypothetical future advanced Artificial Intelligence technology) than not donating at all. GiveWell's Holden Karnofsky wrote in 2012 that "I do not believe that these objections constitute a sharp/tight case for the idea that SI's work has low/negative value; I believe, instead, that SI's own arguments are too vague for such a rebuttal to be possible." GiveWell, unlike LessWrong and the Machine Intelligence Research Institute, primarily promotes charities focused on improving health in the developing world. GiveWell's criticism of MIRI argued that MIRI's focus on supposedly trying to save the world and create "Friendly AI" amounted to a form of [[Pascal's Wager|Pascal's Mugging]] &mdash; promising enormous benefits, even though the probability of actually receiving those benefits is tiny.<ref>[http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ Thoughts on the Singularity Institute (SI)] (Holden Karnofsky, LessWrong, 11 May 2012)</ref><ref>Bostrom, Nick [http://www.nickbostrom.com/papers/pascal.pdf Pascal's Mugging]</ref>
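The "Pascal's Mugging" structure of that criticism is easiest to see as expected-value arithmetic. A minimal sketch follows, with every number invented purely for illustration; these are nobody's actual estimates.
<syntaxhighlight lang="python">
# Expected-value arithmetic behind the "Pascal's Mugging" objection.
# All figures below are invented for illustration.

donation = 3_000_000                    # a hypothetical $3m to give away

cost_per_life_bed_nets = 3_000          # assumed cost to save one life with bed nets
bed_net_lives = donation / cost_per_life_bed_nets        # 1,000 lives, fairly certain

p_project_works = 1e-9                  # vanishingly small assumed probability of success
lives_if_it_works = 1e16                # "all future people": an astronomical assumed payoff
expected_ai_lives = p_project_works * lives_if_it_works  # 10,000,000 "expected" lives

# The tiny probability multiplied by the enormous payoff swamps the concrete
# intervention, however speculative the estimate is -- that is the "mugging".
print(bed_net_lives, expected_ai_lives)
</syntaxhighlight>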
This is not the only example. Reducing animal suffering is an important cause for a significant minority(?) of the movement,<ref>Muehlhauser, Luke [http://www.effective-altruism.com/four-focus-areas-effective-altruism/ Four focus areas of effective altruism.]</ref> but some people have unusual ideas on how to do this. One prominent effective altruist has put up for discussion on his blog the idea of ''destroying nature'' in order to reduce wild animal suffering.<ref>Wiblin, Robert. [http://robertwiblin.com/2010/01/21/just-destroy-nature/ Why improve nature when destroying it is so much easier?]</ref> In fact, some members of the Effective Altruism movement identify as "negative utilitarians", meaning that preventing suffering is the only thing that matters. However, this philosophy seems to imply that we should be willing to destroy the entire world to prevent one person from suffering a pinprick.<ref>Ord, Toby. [http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ Why I'm Not a Negative Utilitarian]</ref>
Despite the many and varied differences of opinion within the EA movement, those that remain in the movement tend not to spend too much time arguing about fundamental "cause selection" issues (whether to donate to AI risk, global health, poverty or animal causes) - and even when they do, such discussions tend to remain relatively civil and non-rancorous. Part of the reason for this is that all EAs are in favour of "growing the pie" of EA supporters at this point in time, and most of them recognise that rancorous discussions would impede that goal. Although ideas about targeting growth differently have been mooted, such as focusing more on trying to recruit the rich (by hard-headed pragmatists) or women and ethnic minorities (by [[Social_justice_warrior|social justice people]]) or people who don't speak English (by people who think outside the English-speaking world), no-one is so pessimistic about their favoured EA cause area that they think that growing the pie won't gain their preferred cause area more EA recruits.
However, one EA has argued that this polite truce doesn't make sense, because if people think their cause is vastly better, they should be spending a lot of their time trying to persuade people of that.<ref name='ozy'>[https://thingofthings.wordpress.com/2016/02/17/concerning-miris-place-in-the-ea-movement/]</ref> [[Scott Alexander]] has counter-argued, based on his extensive personal (and often unsuccessful) experience of arguing with people who are sceptical about AI risk as a cause, that repeated arguments of this kind at EA meetups would be tiring, repetitive, and unpleasant.<ref name='ozy'/> This is not to say that Alexander does not advocate for AI risk reduction &mdash; however, he prefers to write long blog posts where he can assemble his arguments and evidence and engage in an extensive, uninterrupted written monologue.<ref>[http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/ Example]</ref>
==Where "Effective Altruists" actually send their money==
According to William MacAskill of "The Effective Altruism Blog",<ref>http://www.effective-altruism.com/what-effective-altruism/</ref> effective altruists currently tend to think that the most important causes to focus on are global [[poverty]], [[Vegetarianism|factory farming]], and the [[Doomsday scenario|long-term future of life on Earth]]. In practice, this amounts to [[Not as bad as|complaining when people try to solve local problems]], feeling bad when people eat hamburgers,<ref>EAs also disapprove of donating to train guide dogs (a.k.a. seeing-eye dogs in American English), although for completely different reasons than [[PETA]]. While PETA opposes the very concept of guide dogs because they believe that animals should not be owned, EAs note the high cost of training a guide dog compared to performing eye surgeries in poor countries, which can actually cure many people of blindness for the same cost as training a single guide dog.</ref> and sending money to [[Eliezer Yudkowsky]], respectively.
The effective-altruism.com ''2014 Survey of Effective Altruists'' was self-selected, therefore statistically bogus, but includes a list of how many respondents said they donated to various organisations:<ref>[http://effective-altruism.com/ea/gb/the_2014_survey_of_effective_altruists_results/ Blog post], [https://eahub.org/sites/effectivealtruismhub.com/files/survey/2014/results-and-analysis.pdf full results PDF]</ref>
*Against Malaria Foundation: 211
*The Humane League: 22
*Schistosomiasis Control Initiative: 114
*80,000 Hours: 21
*GiveDirectly: 101
*Project Healthy Children: 16
*Machine Intelligence Research Institute: 77
*Centre for Effective Altruism: 14
*GiveWell: 46
*Giving What We Can: 10
*CFAR: 45
*Animal Charity Evaluators: 10
*Deworm the World: 43
*Leverage Research: 7
*Vegan Outreach: 27
CFAR is the Center for Applied Rationality, another LessWrong-subculture organisation; at the time of the survey its mission was to promote rationality techniques, but it repivoted in late 2016 into being another AI risk organisation.
Leverage Research is a separate rationality organisation which has received funding from billionaire [[Peter Thiel]], and is "dedicated to researching the human mind and group dynamics."
Several of the organisations listed (GiveWell, 80,000 Hours, the Centre for Effective Altruism, Giving What We Can, and Animal Charity Evaluators) are themselves EA movement organisations rather than front-line charities.
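For a rough sense of how those survey tallies break down by cause area, here is a short sketch that groups the donor counts listed above. The groupings are this article's own editorial judgement, not categories used by the survey itself.
<syntaxhighlight lang="python">
# Donor counts from the 2014 survey list above, grouped into rough cause areas.
# The groupings are editorial judgement calls, not the survey's own categories.

donors = {
    "Against Malaria Foundation": 211, "Schistosomiasis Control Initiative": 114,
    "GiveDirectly": 101, "Deworm the World": 43, "Project Healthy Children": 16,
    "The Humane League": 22, "Vegan Outreach": 27, "Animal Charity Evaluators": 10,
    "Machine Intelligence Research Institute": 77, "CFAR": 45, "Leverage Research": 7,
    "GiveWell": 46, "80,000 Hours": 21, "Centre for Effective Altruism": 14,
    "Giving What We Can": 10,
}

groups = {
    "Global health and poverty": [
        "Against Malaria Foundation", "Schistosomiasis Control Initiative",
        "GiveDirectly", "Deworm the World", "Project Healthy Children"],
    "Animal welfare": ["The Humane League", "Vegan Outreach", "Animal Charity Evaluators"],
    "LessWrong-subculture organisations": [
        "Machine Intelligence Research Institute", "CFAR", "Leverage Research"],
    "EA movement organisations": [
        "GiveWell", "80,000 Hours", "Centre for Effective Altruism", "Giving What We Can"],
}

for cause, orgs in groups.items():
    print(cause, sum(donors[org] for org in orgs))
# Global health and poverty 485
# Animal welfare 59
# LessWrong-subculture organisations 129
# EA movement organisations 91
</syntaxhighlight>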
==Risks==
[[Robin Hanson]] points out that people accumulate knowledge and wisdom as they get older and may change their minds about important things as a result. He therefore advises effective altruists - those of them who are young, anyway, which is most of them - to do nothing now and save money for later, because they might change their mind about where to give that money. Hanson, who specialises in the study of allegedly hypocritical human behaviour ("X is not really about Y!") argues that effective altruists are prone to irrationally give now rather than later, to [[signal]] sincerity to their fellow effective altruists. When asked whether it is not better to give now while we still can, because our future selves might spend the money on e.g. putting our children through university, he responded "Maybe that's the right thing to do! Why do you distrust your future self so much?"<ref>Robin Hanson, talk at King's College London entitled "Robin Hanson on: Effective Altruism, Betting, Robots & More", 20 March 2016</ref>
Like activism and do-gooding generally, for high-{{wpl|scrupulosity}} people, going overboard with EA can be dangerous. It can lead to {{wpl|burnout (psychology)|burnout}} from overwork and/or neglecting your own needs and/or those of your family. It's worth remembering that, "effective" as it may be to buy a bed net for a child in Africa, people close to you also have needs of various sorts, which can often most "effectively" be met by you.
A significant number of EAs advocate giving large portions (10%+) of your income away on a continuous basis, but it is important to remember that your circumstances may change &mdash; for example, you may lose your job or encounter a health crisis &mdash; so it is worthwhile considering saving some money in case you need it. You can always give away that saved money later, or change your mind if you decide you really need that money yourself.
Excessive moralising about EA can also cause you to &mdash; like a kind of inverse {{wpl|How to Win Friends and Influence People|Dale Carnegie}} &mdash; lose friends and fail to influence people. Arguably, persuading other people to give to good causes is best approached in an upbeat "look what you could achieve" way, rather than trying to guilt-trip people. (The latter is probably more likely to work on people who were already high-scrupulosity and thus more susceptible to EA ideas in the first place &mdash; so the value of "converting" such people to EA by guilt-tripping them could well be less than you might think, because they might have ended up being converted anyway.)
In the (unlikely) worst case scenario, you could lose all your non-EA friends through being seen as extremely preachy and arrogant, then later become financially ruined through a chance accident or illness leaving you unable to work, have no savings to fall back on &mdash; and then receive no help whatsoever from your EA friends despite all the past good you have done, because helping you is not an "effective" cause. This scenario is probably unlikely to pan out this way in practice though. Probably.
==See also==
{{fun}}
*[[Pascal's wager]]
==References==
 
{{reflist}}
 
[[Category:LessWrong]]
[[Category:Libertarianism]]
