Talk:Effective altruism/Archive1
Needs cleanup
Wow, this is so NOT what EA is about... But I guess, the authors know that for themselves... — Unsigned, by: 141.67.130.28 / talk / contribs 22:19, 30 April 2014 (UTC)
This reads like the work of one of the angrier, more luddite-ish Bay Area denizens who've been harassing Google employees of late. Wail, wail, kvetch, kvetch. This page should be kept up, but its content needs a serious purge. — Unsigned, by: 108.16.249.65 / talk / contribs 03:13, 4 May 2014 (UTC)
Seriously flawed
This is a seriously flawed page. The regular wikipedia page is much superior. http://en.wikipedia.org/wiki/Effective_altruism — Unsigned, by: 163.1.72.106 / talk / contribs 10:49, 25 June 2014 (UTC)
Detailed kvetching please
This talk page consists of kvetching about the kvetching in the article. I wish to kvetch about that.
Specifically, please add useful links, info and corrections. Ideally add them to the article (not just removing bits you don't like). Or add them here, and I'll look at them next time my wikiOCD strikes. --Chriswaterguy (talk) 14:48, 26 June 2014 (UTC)
- This seems like a bit of a double standard. The original author of the article (David Gerard, apparently) gets to make a bunch of unsubstantiated accusations against the Effective Altruism movement, and people aren't allowed to remove them? We need links, when the original article had no links to support many of his claims? I understand RationalWiki is not neutral POV but scientific POV, but it's not like EA is the kind of issue on which there's a scientific consensus. Even if RationalWiki were "politically liberal POV," that wouldn't explain this article in its current form. I suspect, but cannot prove, that most EAs are politically liberal (self-identified liberals and socialists make up a majority of LessWrong). So to explain why an article like this would be considered fit for RationalWiki, you need to assume "a specific kind of liberalism which assumes outsiders are dirty libertarians until proven otherwise POV." Or maybe "the POV of whoever happens to write an article on the topic first."
- Incidentally, I tried adding a few [citation needed]s to the more tendentious parts of the article, only to have them reverted by the original author, who did so saying "opinion is allowed." Does this mean I can add my own opinions to the article, and they will be protected from edits? That actually might be kind of amusing, an article that simultaneously trash-talked everyone on both sides of the debate over Effective Altruism. --Chris Hallquist (talk) 14:38, 30 June 2014 (UTC)
- I was very unclear, sorry - too busy being flippant. I don't at all like to see wild and unsubstantiated accusations on any side, and the anti-EA tone of the article is irrational. Improving, though.
- My thought here was that I don't know much about EA, and good sources would help me. I didn't demand sources for the snarkier comments about EA, because it doesn't seem to work that way here, and I didn't think there would be any. --Chriswaterguy (talk) 01:27, 7 July 2014 (UTC)
Tried to patch things up
"Effective altruism" has nothing to do with libertarianism, so I tried to cut out most of the bullshit. I left plenty of room for people to add snark if they want, though. 68.38.214.117 (talk) 02:17, 27 June 2014 (UTC)
- Maybe the page could be divided into sections, one a fuller description of what EA claims to be and one on the libertarian apologia and LessWrong wankery that it actually is? I'm willing to do it in the fullness of time, but I don't want to step into a war. Calieber (talk) 02:34, 27 June 2014 (UTC)
- That seems like a more constructive attitude than "hey, let me just clear-cut the bullshit for you" as the bunch o'numbers above you has done. Sprocket J Cogswell (talk) 02:40, 27 June 2014 (UTC)
Okay, sorry about the edit warring. I deny that EA is libertarian apologia, but "LessWrong wankery" is something I can live with. CronoDAS (talk) 03:00, 27 June 2014 (UTC)
I'm adding the start of a proposed revision. Could someone work on this a bit more and then post it to the main page when it's approved? CronoDAS (talk) 04:11, 27 June 2014 (UTC)
Proposed revision
Effective altruism is a movement to improve the world through making carefully-targeted charitable donations - not only through making carefully-targeted charitable donations, but that is the overwhelming focus. Philosopher Peter Singer started the idea and buys into it big time, pushing it hard wherever he goes.<ref>http://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism.html</ref>
The sales pitch is that, if you're going to try to make the world a better place for other people, you should try to do the best possible job you can. If you had the choice between helping a local community theater group put on a show or saving African children from malaria, the right thing to do is, of course, to save the children. (We think.) People face dilemmas like this in real life whenever they donate money to charity: if you're not donating to the most cost-effective charities that you can, you fail at utilitarianism. (It's impossible not to fail at utilitarianism, but you can fail less hard.)
An approach that people who call themselves effective altruists commonly endorse is "earning to give": get the highest-paying job they can and then donate as much of their income as possible. After all, you can get more done by paying a bunch of other people to solve problems for you than you can do all on your own, right?<ref>http://80000hours.org/earning-to-give</ref> Bill Gates is the poster child for this approach. He made billions of dollars by selling everyone Microsoft Windows, and then gave a lot of it to a foundation to do things like saving African children from malaria. The problem with this is that now everyone is stuck using fucking Microsoft Windows, so "earning to give" has not exactly been a huge public relations success for the movement.
== Where "Effective Altruists" actually send their money ==
According to William MacAskill of "The Effective Altruism Blog"<ref>http://www.effective-altruism.com/what-effective-altruism/</ref>, effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth. In practice, this amounts to complaining when people decide to give money to community theater groups, feeling bad when people eat hamburgers, and sending money to Eliezer Yudkowsky, respectively.
==Footnotes==
{{reflist}}
- Not terrible, go for something like it - David Gerard (talk) 16:52, 27 June 2014 (UTC)
Give David Pearce a chance to rewrite this entry?
David Pearce (who is a professional philosopher) could completely overhaul this entry. You could then add "critical elements" (ignoring root causes; being self-congratulatory; ...;) afterwards.
The current entry reads like an opinion piece to me, one which defends a very narrow position. It's more like a personal blog post than a wiki article.
What I am asking for is that, if he rewrites this article, you do not just revert it without explanation but rather edit it (and preferably discuss the changes beforehand). - XiXiDu (talk) 17:22, 27 June 2014 (UTC)
- See the two sections above that are already discussing a rewrite of the article.--ZooGuard (talk) 17:33, 27 June 2014 (UTC)
- wp:David Pearce (philosopher). He's been pointed at this talk page - David Gerard (talk) 18:17, 27 June 2014 (UTC)
- Umm, that's the same link. And I am not terribly impressed by its contents. I wonder how many "professional philosophers" run a web-hosting company. :) --ZooGuard (talk) 18:22, 27 June 2014 (UTC)
- I don't know. I'm a physicist and I own a web development firm, and a former supervisor of mine partially owns a patent holding company. While I'm not saying that kind of thing is normal, I doubt it speaks all that much to his professional abilities. That said, I have no knowledge of philosophy, so for all I know there might be reasons to doubt him. I just don't think that's necessarily one of them. - Grant (talk) 21:11, 27 June 2014 (UTC)
- Let's see... transhumanism theorist, soma for all, and something with the aroma of eugenics. WP lists one of his claims to fame as having been a speaker at a Singularity Institute conference. BRB, there must be some popcorn in the kitchen cabinet... Sprocket J Cogswell (talk) 22:04, 27 June 2014 (UTC)
- Sprocket J Cogswell, IMO the IJ Good / Eliezer Yudkowsky conception of an Intelligence Explosion is profoundly mistaken. I critiqued it in the recent Springer "Singularities" volume. I've never attended, let alone spoken at, a Singularity Institute / MIRI conference. Zooguard, my co-founder of the WTA did stand-up comedy routines on the London circuit; but this hasn't stopped him writing illuminating philosophy. Likewise, the fact I happen to own a web hosting company has no bearing on the worth - or the lack of worth! - of the material I write. Anyhow, despite Alex Kruel's kind suggestion, there are lots of people better qualified than me to write about the Effective Altruist movement. I was just pointing out that the article, as it stands, does not do EA - or RationalWiki - justice. --Davidcpearce (talk) 06:31, 28 June 2014 (UTC)
- Even if there are other people better qualified than you, they don't seem to be doing anything about it. Take Yudkowsky, he's just whining about the LessWrong article but doesn't say what exactly he wants to change so people like me can attempt to change it. You are here and could do it. XiXiDu (talk) 09:02, 28 June 2014 (UTC)
- Alex, many thanks. If there were a non-negligible chance that an edited version of the rewritten article would stand, I'd do my (fallible) best and go ahead. But the fresh linking of "some members are debt-ridden students and unemployed" to "useful idiots" makes me wonder whether this would be a fruitful exercise. --Davidcpearce (talk) 09:42, 28 June 2014 (UTC)
- As the one who added the "fellow traveler" link (which redirects to useful idiot) I claim it is a bit of on-point snark, not tendentious. As the world turns again to gilded-age levels of income disparity, it helps to have a significant set of the lumpenproletariat on board with the memes which support that disparity. Helps the upper crust, that is. It's been well-known for years that US "conservative" voters are supporting policies that are not in their own best interest. Exploring the talking points/propaganda/bullshit that frame public discourse in ways the masses can swallow is a topic that sits squarely in RationalWiki's wheelhouse.
- Wikipedia notes that DP presented a talk at a Singularity Summit, in the "Affiliations and appearances" section of that article. If that is incorrect, well, it's a wiki, with all that entails. Sprocket J Cogswell (talk) 13:54, 28 June 2014 (UTC)
A response
As someone in the Effective Altruism movement, I'd like to respond to this article.
"Like other movements whose names are lies the advocates tell themselves ("race realism", "traditional marriage"), EA is not quite all that."
On the one hand, you have a point. Obviously, I think the name is accurate, but that's true in every case. The question is whether it's better for a movement to pick a name that reflects something everyone agrees on, or one that just reflects what its members think about themselves.
On the other hand, if we call ourselves People Who Give to the Best of the Charities with Precisely Measurable Outcomes, or something equivalent but less wordy, then we're implicitly rejecting anyone who thinks there's something we can do that's significantly more effective but can't be easily measured, like political activism. I may think that it's better to stick with charities with precisely measurable outcomes, but I could be wrong, and I'd be a fool to ignore that possibility.
"The trouble is that EA is a mechanism to push the libertarian idea that charity is a replacement for government action or funding."
I would love for my government to donate more than a fraction of a percent of its income to foreign aid. I just don't think that spending my time and money on political activism is going to have as much of an effect. I don't think the government is generally very good at doing things efficiently. It might be able to match the effectiveness of the charities the general population donates to. But if you look for the opportunities that people who don't put thought into it tend to miss, you can easily find something an order of magnitude better.
"Who considered themselves obviously the most effective charity in the world"
Since I'm also a member of LessWrong, I'll respond to this. First off, MIRI (formerly known as SIAI) is distinct from LessWrong. LessWrong is an online community dedicated to generally becoming less wrong. MIRI is a non-profit organization dedicated to making a friendly artificial intelligence. I'm not exactly certain how related the two are, but you're making it sound like we're suggesting donating to the members of the LessWrong community.
Second, LessWrong isn't that monolithic. I'm pretty sure most of us do not donate to MIRI, for various reasons. Case in point: the article this cites about GiveWell saying that MIRI would cause more harm than good was posted on LessWrong.
One more thing I want to say: this article is generally insulting. You don't have to be right to insult someone, so insults aren't useful for actually finding out what's right and what's wrong. They just make you look good to a third party, while ensuring that neither you nor whoever you talk about will change your minds. --76.114.7.69 (talk) 03:37, 30 June 2014 (UTC)
stray thoughts
- That last bit is the key to understanding most rhetoric, on the net or off it. Those with firm views, reasonable or not, are unlikely to be persuaded. In other words, in a debate, the disputants are unlikely to change each other's minds. It's the undecided ones, the audience (or the lurkers, in online terms) whose hearts and minds are at stake. Note well, hearts must be won as well as minds. Somewhere there is a French saying about the heart's irrationality. Thinking that "reason rules" is an incomplete view, and an error. Sprocket J Cogswell (talk) 15:56, 30 June 2014 (UTC)
- Convincing an unknown and unverifiable number of lurkers is all well and good as potential icing on the cake, but what about yourself? It seems to me that discussions online are a huge waste of time and a detriment to any participant who doesn't ensure that they are getting something out of it. The only way to have a discussion be beneficial to a participant is to have a change to yourself, no matter how small, come out of it. Increasing your knowledge base of opposing arguments, letting your opponents help you in reexamining your own views, helping your opponent reexamine his/her views and thus confirming the strength of your own arguments, thereby solidifying your position, etc. These are all outcomes of a discussion that may be to your benefit and mean that you haven't wasted your time discussing. These are the sorts of outcomes any discussion participant should be striving to acquire and attempt to steer the discussion towards. Nullahnung (talk) 16:15, 30 June 2014 (UTC)
- That seems backwards. Any benefit I may gain, from taking part in a discussion, is the icing on the cake. That usually takes the form of being prompted to find a clear, understandable way to articulate my viewpoint. I could get that benefit without posting publicly, although knowing that my tappings might be seen by other critical eyes makes an incentive to keep things from going too far off point. From the earliest days of online discussion, a common signoff was "hope this helps." Why bother posting, unless someone else may benefit from reading? Sprocket J Cogswell (talk) 15:01, 21 July 2014 (UTC)
- Au contraire! One of the main advantages of discussing with others is that they may help you understand some things. Not everything can be understood or reflected upon effectively by thinking it through yourself, unless you are a perfect genius. Of course anyone heeding the discussion, like lurkers, may gain something from you as well, but on the internet that comes with quite a lot of uncertainty, whereas what you yourself take from it is much more within your control. Nullahnung (talk) 15:12, 21 July 2014 (UTC)
- I have no interest in making what I gain be within my control. It is exactly the unexpected parts of whatever I read that provide food for thought. I have even less interest in controlling, or even measuring, what others may or may not gain from any discussion; that's up to them. Sprocket J Cogswell (talk) 15:26, 21 July 2014 (UTC)
- Yeah, fair enough. I don't want to come across as some kind of consequence-weighing machine that only makes calculated decisions, that would be far from the truth. I was just musing about what the most constructive attitude towards discussions online is. Nullahnung (talk) 15:39, 21 July 2014 (UTC)
- To be honest, I wouldn't be here if there weren't something in it for me. That includes the wikignome parts of my RW activity, since I like seeing things tidy and easy to reach. Still, I've noticed how easy it is to get narrowly focused on one other party in a conversation, and lose sight of the fact that we are creating a public record, one which will remain for at least a while after we go our separate ways.
- We've drifted— none of this has been relevant to BoN 76.114's topic. Sorry, folks— I will let it rest now. Thanks for reading, Sprocket J Cogswell (talk) 16:49, 21 July 2014 (UTC)
The Goldman Sachs calculation
I know very little about EA. But isn't it futile to try to make such calculations? There seems to be no way to tell whether working for Goldman Sachs will be positive or negative due to ripple effects.
Let's assume you donate all of the money made at Goldman Sachs to Against Malaria Foundation.
First of all, I strongly doubt that it is possible to tell whether donating money to Against Malaria Foundation is worthwhile, because the long term detriments of our actions are uncomputable. Take for example friendly AI research. Even if we accept all the premises underlying friendly AI research as sound, trying to create a friendly AI could easily increase the probability of hell world outcomes, because failing on a crucial detail means that humans are kept alive, but under horrible circumstances.
Secondly, even if you could calculate that the damage you cause at Goldman Sachs could be outweighed by donating to a charity of your choice, other Goldman Sachs employees are not going to do that. And doing a good job for Goldman Sachs means that Goldman Sachs will be able to hire more people that do not donate money but instead cause more damage.
In conclusion, EA people seem to be rationalizing their actions big time. - XiXiDu (talk) 11:21, 30 June 2014 (UTC)
- Hallq's addition is worth keeping as a terrible argument, though with the terribleness pointed out. Mind you, he deliberately put a "citation needed" on the idea that Goldman Sachs was "fucking evil" - David Gerard (talk) 11:32, 30 June 2014 (UTC)
- I'm glad we agree on this much, David. I added that part because I figured the anti-Effective Altruism folks here would appreciate documentation of the fact that some EAs have argued working at Goldman Sachs is justified as long as you aren't killing 100 people per year. Chris Hallquist (talk) 13:42, 30 June 2014 (UTC)
- I took it out because it looked like specious back-of-the-envelope calculation that boiled down to "look at this irrelevant but incredibly large number!11!!" In the circles where it is taken seriously,{{who?}} I suppose it is meant to be a bit of reductio ad absurdum, since no commercial undertaking has the kind of reach it takes to kill 5% of everybody now alive. Or do they???
- If that bit goes back, it will need a note along the lines of "they actually believe that <s>Xenu sent us here in airliner-shaped spacecraft, so we could do good by making lotsa bux and sprinkling them on charities we think are working</s> working at Goldman Sachs is justified as long as you aren't killing 100 people per year."
- My snotty attitude notwithstanding, I am neither pro- nor anti-EA, since I hardly know 'em. You may count me with the anti-BS crowd, though, loud, proud, and persevering. Sprocket J Cogswell (talk) 15:02, 30 June 2014 (UTC)
- There's a reason why this kind of consequentialism (let's steal the poor guy's organs so we can save a few other people) is mostly frowned upon. Because people suck at making such judgements. Which is why you should assign much more weight to exploration rather than exploitation. In other words, try to make your calculations more robust, and verify your ethical principles. Don't draw action relevant conclusions from back-of-the-envelope calculations.
- I've no idea how these people believe that they can calculate that saving a bunch of poor people, living under the threshold where life is worth living, might not lead to more suffering by e.g. increasing conflicts over scarce resources. And working for a system that might be responsible for inequality in order to reduce inequality also seems weird. - XiXiDu (talk) 15:31, 30 June 2014 (UTC)
- Goldman Sachs is really a spectacularly awful example, considering they caused the world food shortage of 2008. We starved you, but here! we're sending you malaria nets! - David Gerard (talk) 16:25, 30 June 2014 (UTC)
- David, I assume the EA response would be that their role in the world food shortage means a recalculation is required of any estimate of how many deaths Goldman Sachs is actually responsible for. That makes it a difficult (but very interesting) calculation. --Chriswaterguy (talk) 02:37, 7 July 2014 (UTC)
- That's really not at all good enough - that they left it out of their calculation in the first place suggests bottom-line thinking (like it wasn't bloody obvious that this is what this is) - David Gerard (talk) 10:22, 7 July 2014 (UTC)
- XiXiDu: "living under the threshold where life is worth living". Wow - are you sure you mean that? I have a much, much bigger issue with someone deciding that other people's lives aren't worth living than I could ever have with EA's position that we should save as many lives as possible. --Chriswaterguy (talk) 02:37, 7 July 2014 (UTC)
- Chris Hallquist: "some EAs have argued working at Goldman Sachs is justified as long as you aren't killing 100 people per year." Technically, yes, but I think it's more about calculating an outside limit on how evil a job would have to be before this thinking would stop making any sense. I don't think anyone's suggesting that you should work-to-give this way, on a job that kills 50 people a year. Intuitively, it seems unlikely that working at Goldman Sachs would kill anywhere near that many people, if it kills anyone at all. That intuition may be wrong, of course, especially in light of David's comment about the world food shortage. (I also place a high uncertainty on my intuition here, given that I don't know a lot about Goldman Sachs.)
- I may be convinced to lean one way or another by calculations, but I try not to be too influenced by snark (unless it contains an actual argument). --Chriswaterguy (talk) 02:37, 7 July 2014 (UTC)
- XiXiDu: "isn't it futile to try to make such calculations?" As an engineer, I've had to face the awkwardness of risk management calculations and get comfortable with them, and this looks much the same. We live in a world with finite money, so be thankful that automotive, aeronautical construction and public health engineers make calculations about cost vs lives saved - they've probably already saved your life or the life of someone you care about. IMO it's irresponsible not to make such calculations.
- Even if I'm wrong, and it's somehow virtuous to not think these thoughts, EA is based on such an approach. A calculation of this type needs to be in the article in some form.
- Sprocket J Cogswell removed the edit with the comment rm disingenuous "analysis" assuming *every* Sachs employee would donate a quarter of a million bucks annually to the malaria mitigation fund. The analysis makes no such assumption. I'll just flag that I plan to reinstate the edit, with some modifications. --Chriswaterguy (talk) 01:54, 7 July 2014 (UTC)
- I didn't mean to imply that we should just give up and don't calculate. My view is more nuanced than that.
- What I am troubled by are people who maximize their impact (people who maximize exploitation) based on shaky premises and fragile calculations. If there is a very good chance that your impact might well turn out to be negative, then you should instead focus on making your calculations more robust (exploration).
- Naively focusing on achieving far greater results with less money will make you favor causes that promise a lot, but whose promises are based on unstable back-of-the-envelope calculations. In such cases the cure might be worse than the disease. Take geoengineering. One should be extremely conservative about trying to influence Earth's climate on a global scale. You might well make everything worse.
- "Naively focusing on achieving far greater results with less money will make you favor causes that promise a lot, but whose promises are based on unstable back-of-the-envelope calculations." If they're big native in this way, then that's relevant. The little I've seen of EA suggests plenty of nuance and balance, and the Goldman Sachs example, then, looks like just an illustration of the basic principle of earning to give. But clearly an example that rubs some people the wrong way. --Chriswaterguy (talk) 14:46, 7 July 2014 (UTC)
- OK, on second look I see what that calculation is trying to point out, but it is still a bogus analysis, with several flaws.
- The numbers smell like they were pulled out of someone's ass. It would be interesting to see the actual salary/compensation distribution over all Goldman Sachs employees.
- The kind of guy who performs at a level to gross $500k/yr is extremely unlikely to feel the need to buy a clear conscience by donating half of that gross to the "mosquito nets for the Mruna" fund.
- Even if such a fictitious marginal employee did exist, and did save 100 lives annually, how is he justified in working for a company whose activity leads to any preventable death at all? By the cold logic of the analysis, his fraction of company-caused deaths could amount to any number under 100, and he could still walk away from it whistling.
- The analysis says nothing about the deaths that his philanthropy did not offset. By supporting the company with his intentional effort, he still bears some responsibility for those deaths and the suffering of the ones they leave behind. Sorry, but buying one's way out of that kind of culpability is not a noble or praiseworthy deed. Sprocket J Cogswell (talk) 04:23, 7 July 2014 (UTC)
- "The numbers smell..." Sure. I don't see that it's a significant flaw for an illustration.
- "The kind of guy who..." This isn't about the kind of guy you have in mind. It's about the person who chooses to earn to give.
- Your other comments don't address the argument. It's not an argument about what feels right to our moral brains, but an argument (open to challenge) about the value of earning in being able to have a large net positive impact.
- Now I'm wondering whether any EA people have actually worked for Goldman Sachs, or if this is purely an illustration. --Chriswaterguy (talk) 14:46, 7 July 2014 (UTC)
- As previously pointed out, it is a spectacularly bad illustration. It starts from a risible premise (there could be this guy at Goldman Sachs giving half of his $500k salary to save the Mruna from malaria) and goes on to imply that he would have a net positive impact if his charitable contributions could be shown to save even one more life than his "share" of the death toll attributable to the company's negligent business practices. It is naive to think that arguments stand or fall only on logic or accounting. Have you ever sat on a jury? Sprocket J Cogswell (talk) 15:09, 7 July 2014 (UTC)
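For readers trying to follow the dispute, here is a minimal sketch, in Python, of the back-of-the-envelope arithmetic being argued over, assuming only the figures quoted in this thread (a hypothetical $500k salary, half of it donated, and the "100 lives per year" claim); the cost-per-life figure and the job's attributed death toll are illustrative assumptions implied by those numbers, not sourced estimates.
<pre>
# Sketch of the disputed "earning to give at Goldman Sachs" arithmetic.
# All figures are taken from, or implied by, the discussion above; none
# of them is a sourced estimate.

salary = 500_000                 # hypothetical gross salary, $/year
donated = salary * 0.5           # the premise: half of gross is donated
cost_per_life = 2_500            # assumed $ per life saved, implied by "$250k saves 100 lives"

lives_saved = donated / cost_per_life    # ~100 lives per year
deaths_attributed = 10                   # hypothetical "share" of harm caused by the job

net_impact = lives_saved - deaths_attributed
print(f"Lives saved: {lives_saved:.0f}/yr; net impact: {net_impact:+.0f} lives/yr")

# The reasoning being criticized treats the job as "justified" whenever
# net_impact > 0, i.e. as long as the job kills fewer than ~100 people per year.
</pre>
The objections above are that the inputs are guesses, and that a positive net figure says nothing about the deaths that remain unoffset.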
EAs and education?
What's their stance on education? My impression had been that they generally under-value education and other enablement, and even discourage it, because the effects of anything of that kind are much harder to evaluate than the effects of malaria nets. Saving a drowning child on your way to work, ruining a thousand-dollar suit - the child can reasonably be expected to live a normal life afterwards, no worse than your own. With education in the third world, one can reasonably expect general enablement, a decrease in the birth rate, and so on.
Saving someone from malaria without trying to address the reason why they were liable to die of it in the first place... that is well intended, no doubt, but it may just be that death from disease is the only third-world problem that the rich can presently understand and relate to, since no matter how rich you are you can eventually die of disease. The utter lack of opportunity, the systematic prohibition from employment where it exists through the construction of oppressive fences and checkpoints to keep the poor in the opportunity-free zones - those are some of the issues that first-world denizens cannot understand, relate to, or often even acknowledge. Dmytry (talk) 20:50, 3 July 2014 (UTC)
new article
Seems like there was a lot of discussion of a rewrite with no rewrite. So I wrote it. Edits are appreciated and probably much needed. User:Kelseypiper 11:06, 20 July 2014 (UTC)
- I just rolled it back to status quo ante today's edit skirmish started by some Bay Area BoN. This talk page is now the place for discussing proposed changes to the tone of the article. Sprocket J Cogswell (talk) 13:26, 20 July 2014 (UTC)
- Yeah, that was Kelsey, who's just created a login to discuss said rewrite (we've just been emailing about it). Treat in good faith - David Gerard (talk) 14:00, 20 July 2014 (UTC)
Snark vs argument
I think this article would be better if it actually made its arguments (e.g. the comparison with "think of the children" in politics) rather than just snarked with links and jokes. But possibly that would expose weaknesses in the (currently implicit) arguments, so we can't have that, can we?--Greenrd (talk) 07:20, 23 May 2015 (UTC)
- +1
- Using snark in place of actual arguments is profoundly irrational. I understand that there is a cultural value placed on snark, here, but if there's no willingness to be rational, what are we doing here? --Chriswaterguy (talk) 08:13, 23 May 2015 (UTC)
- Eh, so what's actually wrong with the article? It seems a bit odd to grumble about too much snark and too few arguments and not provide any examples of what's actually wrong and/or concrete suggestions for improvements. Or, and here I'm going out on a limb, perhaps you could just forge ahead with some improvements? (A radical suggestion on a wiki, I know, but bear with me.) ScepticWombat (talk) 14:27, 25 May 2015 (UTC)
--"Where "Effective Altruists" actually send their money
According to William MacAskill of "The Effective Altruism Blog"[9], effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth. In practice, this amounts to complaining when people try to solve local problems, feeling bad when people eat hamburgers, and sending money to Eliezer Yudkowsky, respectively." Alternatively, you can say that EAs have given to the tune of millions of dollars for malaria bednets, fighting off parasitic worms, campaigning against factory farming, etc. You can say that over a thousand people, mostly in the first world, some traditionally seen as "poor", have pledged 10% of their lifetime incomes towards making the world a better place. You can say that EAs have quit promising philosophy PhD programs to live on a minimum-wage budget while working on Wall Street. You can say that EAs constantly re-evaluate their giving decisions, and feel no small sense of responsibility at the weight of the world... a literal Holocaust every year of people who die from preventable causes. You can say that Eliezer Yudkowsky's MIRI is actually fourth on the list of charities EAs donate to, behind the Against Malaria Foundation, SCI and Give Directly. http://effectivealtruismhub.com/sites/effectivealtruismhub.com/files/survey/2014/results-and-analysis.pdf
But you know, that would be admitting that sometimes other people actually do good instead of spending their free time feeling self-righteous about not being redneck nutjobs on a fuckin' wiki that is strictly worse than wikipedia. — Unsigned, by: 75.134.23.209 / talk 04:05, 3 August 2015
LW "considers itself an obvious beneficiary" of EA - source?[edit]
I was misreading the line: "Effective Altruism is also pushed by Bay Area technolibertarians and LessWrong. The latter, of course, considers itself an obvious beneficiary (if not the obvious beneficiary)." I'd assumed it meant that EA as a movement would benefit LW in some way. But David Gerard's edit came with the edit summary: "LW pretty clearly isn't actually one, given e.g. the GiveWell assessment. Unless blog posts and fanfic save amazing numbers of lives." (The same reversion had been made by an anon, but without the explanation.)
Okay, good, now I know what's going on. But for the statement itself - is it just bitter and cynical, or is there a justification for it? --Chriswaterguy (talk) 13:06, 25 May 2015 (UTC)
- They have touted themselves as one and are well-cited as to claims of "800 lives saved per $100 donated". I can dig up cites at my considerable leisure (the smoking gun quotes are from EY in videos, so are a PITA to nail) but they've put themselves forward as the ideal candidate lots and lots - David Gerard (talk) 22:01, 25 May 2015 (UTC)
- Okay, wasn't aware - would be good to know where that's been stated.
- I'd be surprised if EY still believed that of LW itself, and even more surprised if many other LWers believed it. --Chriswaterguy (talk) 23:55, 26 May 2015 (UTC)
- EY still believes MIRI's mission is the most important thing in the world. Why would you think he didn't? - David Gerard (talk) 12:09, 27 May 2015 (UTC)
Snark against Bill Gates difficult to understand?
Are people seriously saying that "everyone stuck using fucking Microsoft Windows" is a bigger problem than malaria, malnutrition, etc? I mean, I know this site is all about First World Problems, but... — Unsigned, by: 75.134.23.209 / talk 04:10, 3 August 2015
- Eh no, and that's not what the text actually says either... ScepticWombat (talk) 06:02, 3 August 2015 (UTC)
--"[Bill Gates] gave a lot of it to a foundation to do things like saving African children from malaria. The problem with this is that now everyone is stuck using fucking Microsoft Windows," How else am I supposed to interpret this claim? At the most charitable, this is saying that "everybody stuck using f-cking Microsoft Windows" is a problem on the same order as malaria. From context, the author does not appear to be saying anything close to "This approach is awesome but it has minor flaws." — Unsigned, by: 199.204.56.140 / talk / contribs (signed by bot) 01:25, 04 August 2015 (UTC)
This article is largely terrible
and I started it, so there I go. Greenrd, you feel like giving it a top-to-bottom cleanout? You know the LW rationalist sphere reasonably well.
(At least it's completely unimportant. As far as I can tell, about the only people reading it are EAs.) - David Gerard (talk) 18:53, 7 September 2015 (UTC)
- Yup - cool, will give the rewrite a go. I was planning to write about criticisms of EA anyway - I just did not expect to be doing it on rationalwiki! I'll try to make the article somewhat balanced.--Greenrd (talk) 13:13, 8 September 2015 (UTC)
points to discuss
Add others - David Gerard (talk) 15:58, 8 September 2015 (UTC)
- Why Peter Singer is Wrong About ‘Effective Altruism’ - tl;dr charities need someone who actually gives a damn about that particular cause, and those are not fungible.
- Lotta EAs talk about EA as if haha it has nothing to do with those silly MIRI ideas any more, no, we're sensible now! ... except the Dylan Matthews article extensively documenting otherwise. It's a serious problem that MIRI have yet to be laughed out of the movement.
- The worming thing, Cochrane review vs EA, needs noting.
- This article is a parody, though written by an EA for EAs - take care not to raise it as a serious source.
- Art is a selfish waste of time. (Note Yudkowsky previously stating there was no place for art in the Singularity.)
- GiveWell are heartily part of EA, but seem reasonably sane and balanced.
- EA and disability. Singer is right into the concept of some lives being less worth living than others, particularly the disabled, up to and including post-natal abortion, and anyone who's disabled or has a loved one who is will know that the medical system leans this way in practice also. Scott Alexander was asked directly about this (by a disabled person who was quite experienced in being treated as less than human by the medical profession) and, to be super-generous, fumbled spectacularly. But I wouldn't use a single ghastly Tumblr post with even worse attempts at explanation as a basis for a claim about "EA" and wonder what the official and unofficial word is. That said, LRB notices this problem too - it's a huge and obvious one when your founder says terrible things in the name of the same utilitarianism that's the core of EA, and they really just don't address it ("problematic fave" is really not adequate).
Pascal's wager
A See Also link to Pascal's wager was just added. While I can see the obvious relevance Pascal's wager has to Roko's Basilisk, it's not clear to me what the relevance of Pascal's wager is to EA. As far as I'm aware, most EAs aren't EAs for religious reasons (ignoring my tongue-in-cheek comparison of EA to a religion itself), and Roko's Basilisk isn't a hot topic in the EA community, so I don't think it's worth including Pascal's wager as a see also if it's only based on indirect connections via either of those things.--Greenrd (talk) 14:09, 8 September 2015 (UTC)
- Both EA and the Wager are mental exercises in maximizing some poorly specified Good Thing by applying logical reasoning to shaky premises. MaillardFillmore (talk) 14:31, 8 September 2015 (UTC)
- I'm unconvinced Roko's basilisk has any relevance to EA. EAs are part of the LessWrongsphere, and tend to deny EY while still behaving as though they believe the ideas (cf. MIRI not having been completely laughed out of the movement, as would have happened if this inanity wasn't heartily embraced by a pile of them) - to use local jargon, they claim not to believe, but clearly alieve - but I've never seen a more direct connection. Pascal's mugging arguments, however, were openly used by Nick Bostrom to push the MIRI agenda at the EA meet per the Dylan Matthews article (a more Singerian EA who was pissed off at the Yudkowskian EAs?) - David Gerard (talk) 15:39, 8 September 2015 (UTC)
- It does have relevance. If you re-read Roko's basilisk/Original post, it is clearly talking about existential risk, which is one of the three most prominent EA "causes" that EAs care about - it is not talking about (dis)utility in general, contrary to what the Roko's basilisk article claims. But that's by-the-by because even if it was, EA is still about being practically utilitarian and donating to "the most effective" charities. If a utilitarian is donating to (what they believe to be) "the most effective" charities, they are an effective altruist whether they want to be seen as one or not, as far as I'm concerned - that's really on-point to what the term "effective altruist" is supposed to mean. And it does mean beliefs about what is most effective, not the "truth", because no-one has a crystal ball. So it's very relevant either way - at least, I think the basilisk article should continue to link to the EA article, I'm not sure about the reverse. To be clear, I am not saying that I believe the basilisk idea.--Greenrd (talk) 16:20, 8 September 2015 (UTC)
- RB is about existential risk, EA mentions existential risk ... that's "A has X, B has X, therefore A is notable in regard to B", which I don't think holds at all. Do you have anything more direct relating RB to EA, e.g. discussions in EA circles?
- Also, I'm not sure the RB article should really link back here - "Effective Altruism" as a buzzword hadn't even been coined then IIRC. Though it is a good example of trying to take the most-effective-altruist thing way too far - David Gerard (talk) 17:31, 8 September 2015 (UTC)
- Thinking further on this, hmm. EY did push EA super-hard even before it was tagged that and got a lot of LWers into it (which is why MIRI hasn't been laughed out yet). Tricky one - David Gerard (talk) 17:49, 8 September 2015 (UTC)
Not much rationality in this article
Looking through the last year of talk discussion it looks like nothing has really changed - this page is a mess of snark and decidedly irrational attacks (a list of comparisons with religion, for instance). Someone ought to take a hatchet to this page - I would gladly do so, but obviously not until we reach consensus. 129.81.28.193 (talk) 16:44, 2 October 2015 (UTC)
- User:Greenrd, you still up for the task? - David Gerard (talk) 19:48, 2 October 2015 (UTC)
- Yes, it will be my new year's resolution to do this. I think I've done enough research into EA by now.--Greenrd (talk) 21:09, 28 December 2015 (UTC)
- It is quite a bit better now, thanks. There is much less snark and it is much less irrational. I would note that the earning-to-give section sorely needs citations for its contentious claims and original research, and GiveWell is now more in favor of funding MIRI/FHI/FLI. But those are separate issues. There is still some snark but it doesn't read like a bad Cracked article :) However, I'd expect this article will need a thorough rewrite anyway due to its structure and layout, which needs to be more intuitive and comprehensive. There are still other problems but it could be a lot worse. 24.251.35.159 (talk) 08:04, 12 March 2016 (UTC)