Talk:LessWrong/Archive18

This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

So which fallacy is the Noncentral Fallacy?

It's been variously attributed on this talk page as the continuum fallacy and on the article as poisoning the well. Is it identical to both, identical to neither or what? - David Gerard (talk) 22:06, 14 June 2013 (UTC)

I thought the continuum fallacy was the fallacy of the grey. In any case, do you really believe Yvain has come up with a new logical fallacy that nobody knew about before?--Baloney Detection (talk) 19:05, 16 June 2013 (UTC)
The fallacy of gray is when someone dismisses an attempt at judgment/improvement by declaring that the situation is complicated and that perfection is unattainable. It is mostly the same as "making the perfect the enemy of the good."--ADtalkModerator 20:05, 16 June 2013 (UTC)
These logical fallacies are "informal fallacies", meaning that there are dozens of ways of categorizing all types of bad thinking and bad argumentation, in more and less specific ways. There's nothing in the 'poisoning the well' fallacy that demands that you call X a member of category Y, when X genuinely belongs to Y but is not a central/typical example thereof -- which is the description of the noncentral fallacy. MEANING THAT THE TWO ARE DIFFERENT SETS. Possibly overlapping but *different*. You can argue that the "noncentral fallacy" is a *subtype* of poisoning the well, and then I could very well agree with you, but you can't claim with a serious face that it is the same as "poisoning the well". Aris Katsaris (talk)
Hmm, when reading it again the intro seems more like guilt by association, but the examples are poisoning the well. In any case, what's wrong with the already existing age-old terminology? It seems to me to be just a case of LW trying to appear smart.--Baloney Detection (talk) 21:11, 16 June 2013 (UTC)
Fallacy of grey/continuum fallacy is not "making the perfect the enemy of the good." That would imply an actual judgment can be made. This fallacy is more along the lines of, for example, "There is no clear demarcation between science and pseudoscience so everything's made up and the points don't matter." Nebuchadnezzar (talk) 21:37, 16 June 2013 (UTC)
Hm. Ok.--ADtalkModerator 22:02, 16 June 2013 (UTC)
Yes, "making the perfect the enemy of the good" is the Nirvana fallacy.
These informal fallacies overlap and shade into each other. Although there's generally never justification for a neologism in amateur philosophy, the "noncentral fallacy" isn't really any of these things, although it shades into several of them - David Gerard (talk) 22:43, 16 June 2013 (UTC)

Another thread about LW/Yud on the Space Battles forum

It really is a place where people aren't willing to kiss Yudkowsky's feet just because he says "rational" 78 times a day. Here is a fine post pointing out the problem with Yudkowsky's claim of being an autodidact: http://forums.spacebattles.com/threads/is-eliezer-yudkowsky-insane.257341/page-6#post-10663540 --Baloney Detection (talk) 23:17, 17 June 2013 (UTC)

LOL! That thread begins by treating "3^^^^3" as if it meant "billions". Thanks for that bit of comedy, right there, Baloney Detection, I'm literally laughing out of sheer amusement right now! And it's a *very* good example of how most critics of LessWrong are complete and total *buffoons*! Oh, man.... Really, thanks for that! Aris Katsaris (talk) 23:53, 17 June 2013 (UTC)
I have some experience with arguing with religious people, Muslims in particular. They tend to act as if Darwin were my prophet, meaning that if they find a single error in something Darwin said or wrote, it invalidates the entire theory of evolution. That is what you are doing. Find a single error in the critiques of Yudkowsky and suddenly they are all completely invalid. Just face it, your guru is a self-inflated fool who will be completely forgotten within a century (and, if his OkCupid profile is correct, will not have any descendants).--Baloney Detection (talk) 10:13, 20 June 2013 (UTC)
You don't see the irony, here? You're the one who explicitly and obviously acts as if Yudkowsky is *my* prophet, much like you say the religious do with Darwin. *I* am the one who addresses each idea directly. I didn't need to mention Yudkowsky at all when I supported the concept of Bayesianism as stronger than science, nor do I need to discuss him when I choose torture-over-specks or when I support one-boxing. I address the points directly without need to reference the people who espouse or oppose the idea. You on the other hand are utterly incapable of addressing any idea by itself, and hence you're constantly talking about how supposedly Yudkowsky is my "guru" or my "prophet" or whatever, and throw personal insults into every discussion. I somehow care about whether Yudkowsky will be forgotten or not, or whether he will have descendants? How delusional can you get? You even accused AD of being a Yudkowsky drone for chrissakes! (Btw, as a sidenote: I've assigned greater probability to Max Tegmark being remembered a thousand years from now, as I mentioned before at http://adamcadre.livejournal.com/180985.html )
To summarize all the above -- it's you who are obsessed with Yudkowsky, not me. He's not my guru, nor my prophet, and I am not even sure whether I like him as a person or not -- though I certainly like him more than I like *you*, that's a quite low standard to pass! Aris Katsaris (talk) 11:27, 20 June 2013 (UTC)
Aris, man, if you're donating some sort of allowance, government stipend or the like to MIRI, go talk to a counsellor or relatives or someone please, and try to explain why you are donating real money for someone with Yudkowsky's list of actual accomplishments to get $100k/year. Seriously. Yudkowsky's expressed beliefs in how likely cryonics is, or how worthwhile it would be for him to live a transhuman life, are entirely incompatible - by a ridiculous number of orders of magnitude - with how much he takes out from this ridiculous cause. I realized that I can't possibly change your views on the crazy shit like doomsdays and the plausibility of basilisks, but you have got to at least realize that Yudkowsky does not believe in this stuff any more than I do. As for treating 3^^^3 as if it means billions, first off, Yudkowsky is an uneducated idiot because there simply can't be 3^^^3 distinct people (let alone 3^^^^3), and secondly, 3^^^3 is not the smallest number that would work; it's just that Yudkowsky read about the up-arrow notation somewhere once. edit: and technically, 3^^^3 is 3^^^3*10^-9 billions. Dmytry (talk) 18:39, 21 June 2013 (UTC)
LOL! Damn, this talk section keeps providing hilarious comedy for me. "Allowance or government stipend"? I know you meant it just as an insult and don't care about it really, but let's pretend that you're really interested in me: I work in the private sector (not the kleptomaniac public sector of Greece) and I'm likely in a better financial condition than all my close relatives combined, so no: it's just my own money that I'm donating. And if your point about talking to a relative is supposed to be "your relatives would disapprove", eh, I don't care? Or more accurately, exactly as with my atheism, I care enough about not causing them distress that I wouldn't inform them about it. Doesn't mean my atheism is wrong. Aris Katsaris (talk) 19:53, 21 June 2013 (UTC)
As for your points regarding "3^^^3", didn't you notice that your second point stepped all over the toes of the first? Indeed 3^^^3 is not the smallest number that can work (I once calculated something like 10^21 dustspecks might suffice, but obviously for numbers as tiny as 10^21 my judgment is much less certain) -- and either way, don't you see that this is completely missing the point? The point of the dilemma is whether it can be ethically worse to have a tiny disutility multiplied across a vast number of people than a large disutility to a single person. If dustspecks don't mean anything to you, you can use something larger like papercuts. And if papercuts don't work, use migraines. And if migraines don't work for you, use something even larger like broken legs. Is there a number X where you would yourself admit "It's ethically preferable to allow a single person to be tortured 50 years than allow X people to have their legs broken"? Aris Katsaris (talk) 19:53, 21 June 2013 (UTC)
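To make the linear-aggregation reasoning being defended above concrete, here is a minimal Python sketch. The disutility numbers are invented placeholders (chosen only so the dust-speck case lands near the 10^21 ballpark mentioned above); nothing in it comes from the thread or from LessWrong itself.

 # Strictly additive aggregation of suffering across people.
 # All disutility values below are illustrative assumptions, not canonical figures.
 TORTURE_50_YEARS = 1_000_000.0   # assumed disutility of one person tortured for 50 years
 HARM_PER_PERSON = {
     "dust speck": 1e-15,
     "papercut": 1e-6,
     "migraine": 1e-2,
     "broken leg": 10.0,
 }
 for harm, disutility in HARM_PER_PERSON.items():
     threshold = TORTURE_50_YEARS / disutility
     print(f"Under linear aggregation, {threshold:.3g} cases of '{harm}' outweigh the torture.")

Under strict additivity some finite X always exists for each harm; the disagreement that follows is precisely over whether aggregation has to be additive at all.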
"3^^^3 is 3^^^3*10-9 billions". LOL! Come, Dmytry, the comedy you provide is supposed to be *unintentional*, I can't properly laugh when I know you really mean it as a joke! As for your judgments on other people's honesty, your own judgment of *me* has been bad enough, that I have first-hand reasons to not take your judgment of Yudkowsky as significant Bayesian evidence. Aris Katsaris (talk) 19:53, 21 June 2013 (UTC)
You look like you're a student, you have boatloads of spare time, hence the assumption about a stipend of some kind, and I don't see any big-shot income source that would explain that (in my case I successfully published a computer game). Yes, the ethical view endorsed by Yudkowsky is that there is some number of dustspecks so large that one should rather torture someone for 50 years than allow that many dustspecks in that many eyes, and the point of those describing it as "billions" is that there is no such number, however large, so it is entirely irrelevant to their argument that they describe it as billions. No, the view endorsed by Yudkowsky is not made any saner by it being 3^^^^3. This is a common pattern with religious nutjobs indeed - they will point to an entirely unimportant detail - which you right now even acknowledge as unimportant to the core point! - as the reason why it is not insane. As for Yudkowsky's honesty, once again: the guy takes out at least $100k/year from the cause, which is by the way a limit imposed by Peter Thiel. The guy is signed up for cryonics and believes it works and is incredibly worthwhile given the huge reward of living forever (a reward which is of course only realized if AI doesn't kill us all). The guy says he believes the cause is important to saving mankind (and himself with it), important enough for e.g. you to donate to it. That just doesn't add up coherently, can you understand that? Meanwhile simple selfishness explains everything perfectly. Now of course if you ask him he can go on all about how wasting $100k/year is the best thing that could be done because it's spent on his fun and that fun helps him work. It doesn't change the fact that there is a trade-off between living now and living later, which is very severe when one believes in a very long lifespan in the future. The beliefs of agents can be inferred from their actions. The whole charade looked a lot less stupid back when he was waxing poetic about how he only needs a roof to keep the rain off his books and an internet connection. (By the way, there was an error that you didn't notice - the up-arrow notation was misremembered - 3^^^3 is not a very large number, 3^^^^3 is.) Dmytry (talk) 14:35, 22 June 2013 (UTC) edit: also, note the "if" in my sentence, hardly a judgement yet. If you're indeed donating your own money, well, whatever. Dmytry (talk) 14:45, 22 June 2013 (UTC)
To answer your babble in summary: (1) Don't assume that "I look like a student" because I have uploaded some photos that were taken 10 years or so ago. Those speak more about my vanity than about my age, since I don't like my more recent "wow, you've grown even fatter!" photos. I'm 34 years old. (2) You are just claiming as a dogma that no such number exists, rather than arguing reasonably why it's so, and you downright chose to ignore my followup questions which use "broken legs" rather than "dust specks". You say my position is insane, but it's you who can't handle logical reasoning or followup questions. Here you exemplify the fundamental difference in ethos that makes LessWrongers be fans of LessWrong -- we like to discuss things, and we're tired of people trying to bash people into submission with arguments no more coherent than "You are STRANGE TO ME, and hold different OPINIONS, and are therefore MAD, and therefore I'll just MOCK you right now." (3) Your obsession/fascination with Yudkowsky really is boring to me. And your argument (which effectively boils down to a claim of 'everyone who's not a scam-artist must live as a hermit') is far from as conclusive as you seem to think it is. But you know what? As long as there's no other organization competing for my donations with MIRI in solving these FAI issues, it doesn't actually even *matter* for the purpose of my donations whether Yudkowsky himself is a scam-artist or not. Once again, the thing about him-not-being-my-guru-or-my-prophet? Do try to keep it in mind, and that'll help you realize that you need to attack *arguments*, rather than think I'll somehow be upset because he's not acting like John the Baptist. (4) 3^^^3 not a very large number? It's unimaginably large, much larger than the number of atoms in the observable universe, much larger than a googolplex: a 3^(3^(3^(...^3))) tower which is 3^^3 = 7,625,597,484,987 levels high. "3^^^^3" is of course a scale of unimaginability even higher than that. But when a person doesn't understand the difference between "7 billions" and 3^^^3, it seemed dickish of me to ask them to consider the difference between 3^^^3 and 3^^^^3, since those are both unimaginably high. That's the sort of nitpickery that you commit while at the same time accusing *me* of committing it. Aris Katsaris (talk) 16:49, 22 June 2013 (UTC)
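For readers unfamiliar with the notation being argued over, here is a minimal Python sketch of Knuth's up-arrow operator (the function name up_arrow is just for this illustration). It is only computable for tiny inputs, since 3^^^3 is already a power tower of 3s roughly 7.6 trillion levels high.

 # Knuth's up-arrow notation: one arrow is exponentiation, and each extra
 # arrow iterates the previous operation. Only tiny inputs are feasible.
 def up_arrow(a: int, arrows: int, b: int) -> int:
     if arrows == 1:
         return a ** b
     result = 1
     for _ in range(b):
         result = up_arrow(a, arrows - 1, result)
     return result
 
 print(up_arrow(3, 1, 3))  # 3^3  = 27
 print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
 # up_arrow(3, 3, 3) would be 3^^^3: a tower of 3s 7,625,597,484,987 levels
 # high -- far beyond anything representable, so do not try to run it.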
"You are just claiming as a dogma that no such number exists, rather than argue reasonably why it's so". This is ridiculous. That is up to definitions. You're arguing as a dogma that it has to be linear. Plenty of things are not - no number of copies of a single mediocre work of literature end up equal to a great work of literature. We clearly can gauge worth of a library, and it is such that having more of one book quickly caps out. The linearity of his arises out of the belief that the 'correct' way must be describable mathematically, combined with conceptual poverty in mathematics due to lack of education. WRT living like a hermit or not, **this is for that very specific situation of cryonics / uploading / awesome future life beliefs**, where spending money now increases risk of not surviving into the very awesome and very long future. With regards to "As long as there's no other organization competing for my donations with MIRI in solving these FAI issues", you'd need zero probability of any kind of existential risk that is effectively addressed by a less scammy looking organization. Really easy to imagine something a lot less scammy: Eliezer "Not a Scammer" Yudkowsky had made a lot of money off his commodities trading software, wrote a popular programming language, produced important improvements in computer vision, and is now spending money on averting existential risks. Suppose MWI is true, consider worlds that forked off in 1913, there's worlds where everyone dies of AI, there's worlds that are saved. What fraction of the worlds do you think get saved from doom by something that looks like MIRI? This is ridiculous and stupid. Dmytry (talk)
"You're arguing as a dogma that it has to be linear." Uh, no, it doesn't "have to" be linear. It would be consistent to argue that there's e.g. at least two types of suffering, let's call them Red-Suffering and Blue-Suffering, and torture is Red-Suffering while dustspecks are Blue-Suffering, and to allow any amount of Red-Suffering (no matter how small) is ethically worse than allow any amount of Blue-Suffering (no matter how large). There's nothing inherently inconsistent about a mind that so handles the issue of suffering. But the whole point is THAT THERE EXISTS HERE A BLOODY WORTHWHILE QUESTION TO ASK! Even if we *all* agreed that torture is worse than dustspecks, there'd be a useful followup question to ask about *what* are the criteria that qualitatively distinguish Red-Suffering (suffering like torture) from Blue-Suffering (suffering like dust specks). Aris Katsaris (talk) 19:19, 22 June 2013 (UTC)
And perhaps people answer it differently, yes, according to different intuitions and different understandings, sure! There are *lots* of respectable answers to the dustspecks-vs-torture dilemma. But the one reply that isn't respectable is your own (and your kind's) anti-intellectual attempt at shutting down and dismissing discussion altogether. And this is what I mean by the difference in ethos. The LessWrong forum doesn't *agree* about what is the proper answer to the dustspecks-vs-torture dilemma -- people have different answers, and I actually think that my own position is in the minority. But we acknowledge it as something to discuss. You don't. You mock and dismiss half of us (including me) on this issue as being supposedly "insane", and therefore you're of negative usefulness to any discussion. It's not that you *disagree* with LessWrong; you aren't willing to engage with the arguments of LessWrong. Aris Katsaris (talk) 19:19, 22 June 2013 (UTC)
First off, I do not even think the dustspecks are a good example of insanity (however, when combined with a messiah complex and transhumanism, the cumulative craziness becomes actionable); I was merely responding to your critique with regards to treating 3^^^3 as billions. Secondly, Yudkowsky is the one who believes there is a correct answer to such a question, attributing other answers to scope insensitivity. With regards to the ethos, you are right here arguing by accusation: earlier you were trying to mock my joke about 3^^^3 * 10^-9 billions, while misinterpreting, and then not engaging at all with, an actual argument regarding giving your money to probable scams on the basis of a lack of alternatives. I guarantee that Yudkowsky wishes he could have been my hypothetical Eliezer "Not A Scammer" Yudkowsky and succeeded at what he tried. The rest of his crap (MWI and the like) is about as successful at being correct as his programming language is at being an actual programming language, but that is more a matter of opinion, because those are graded by people rather than by a machine that knows no excuses and reads no sense into nonsense. edit: and given that Yudkowsky is the highest-paid MIRI employee, yes, it's relevant to your donations. More so than, say, the educational credentials of Ben Goertzel were, back when they employed him to look more reputable. Dmytry (talk) 22:19, 22 June 2013 (UTC)
Your hypothetical "Not A Scammer" version of Yudkowsky sounds more provably capable than Real-Yudkowsky, but in what way does he sound more honest? If such a person existed, wouldn't you be saying "it's just another commercial venture for him like the ones he made his mega-bucks on", or perhaps "oh, we're supposed to suddenly believe he's interested in Transhumanism and the Singularity? If he truly believes the Singularity is a real thing, wouldn't he have spent the last few years managing the SL4 Singularitarian list, rather than making mega-bucks?" That's what I imagine you saying in that scenario. And since we're on the subject of mega-bucks, Peter Thiel has donated hundreds of thousands of dollars to MIRI; how do you explain that *he* seems to consider it a good investment of his money? Doesn't he see the same signs of scam-artistry that you seem to find so obvious? Aris Katsaris (talk) 23:57, 22 June 2013 (UTC)

The "Not A Scammer" Yudkowsky is wasting his own money, not making megabucks. Why would I be saying that, anyway? I hate particular type of person which Yudkowsky is and hypothetical Yudkowsky isn't. And if I were to say that Yudkowsky should of been wasting his time on an obscure mailing list of wankers instead of making money for his cause, I would have been a lot sillier than I am to reject Yudkowsky, and you would have been a lot saner for donating. With regards to Thiel, he has also recently put 15 millions dollars into an AI startup that is to imitate mammalian brain with no talk of friendliness what so ever. Hundreds thousands dollars seem huge until you consider that it is a microscopic fraction of Thiel's wealth and an amount orders of magnitude below the amount he tends to put into a good investment. Not that 1 truly devoted rich backer would have been evidence enough to put it above scientology which has many rich backers. Dmytry (talk) 07:18, 23 June 2013 (UTC)

"regards to Thiel, he has also recently put 15 millions dollars into an AI startup that is to imitate mammalian brain with no talk of friendliness what so ever." May I have a reference for that? At http://thielfoundation.org/project/ I only see MIRI and an anti-aging institute listed under "Science and technology". Thanks. (EDIT: Nevermind, I found it at http://gigaom.com/2012/08/21/vicarious-gets-15m-to-search-for-the-key-to-artificial-intelligence/ ) Aris Katsaris (talk) 14:41, 23 June 2013 (UTC)
I also have a second question for you. In the hypothetical counterfactual world where you accepted Yudkowsky's arguments about the dangers of AI as valid, but still believed that Yudkowsky was a crook (in short, he made true arguments that he happened to not believe in), what institute would you donate your money to for the purpose of the study of AI-friendliness and existential risks? Something like http://cser.org/ (Centre for the Study of Existential Risk) perhaps? Aris Katsaris (talk) 14:41, 23 June 2013 (UTC)
Note that x-risks raise the utility of money in your bank account as well (in the event that a future organization with some shot at success appears). If you assume a low probability of such an organization emerging, that proportionally lowers the probability that any existing organization is genuine, as well. Another thing is that Yudkowsky is not merely a crook; he's not very bright either. Dmytry (talk) 16:45, 23 June 2013 (UTC)
This is a non-answer, so I have to assume you have no answer to give. And haven't you yet understood that your repeated insults towards Yudkowsky have long reached the point where they achieve negative results, if your aim is to convince me? They just make you look utterly obsessed and utterly incapable of any sort of objectivity regarding the issue. Yes, you believe Yudkowsky to be at the same time stupid, a crook, and a megalomaniac messianic madman. Whatever. I also get that you're utterly incapable of discussing anything without bringing up these insults, which is evidence against the need to take you seriously. (A fanatic is someone who can't change his mind and won't change the subject.) You've proven incapable of even changing the subject. Aris Katsaris (talk) 16:50, 23 June 2013 (UTC)
First off, your question included a proposition about Yudkowsky, so it doesn't necessitate a change of subject as much as you seem to think it does; secondly, discussing MIRI without discussing Yudkowsky is like discussing Scientology in its early days without discussing L. Ron Hubbard; and thirdly, I did answer your question: re-calculate the utility of money in your bank account, given a strategy of donating it to a future x-risk organization that is exceptional. The Centre for the Study of Existential Risk is better by far than MIRI but still completely mediocre and worthless. Dmytry (talk) 08:33, 25 June 2013 (UTC)

Of morality and dustspecks

I don't care to try to answer this particular question so much as say that it exemplifies the reasons I don't find LW to be very enlightening on a range of issues, especially ethics. As we all know, "politics is the mindkiller," so we're stuck debating extremely abstruse, hypothetical problems like this one. I don't mean to deny the importance of thought experiments or abstraction, but they all start to look very irrelevant when huge ethical problems are cordoned off from discussion. Nebuchadnezzar (talk) 22:37, 22 June 2013 (UTC)

Yeah, and it's not even a good debate of such abstruse problems. Basically they try to express their moral intuitions using their very limited mathematical vocabulary, lose a lot in the translation, and then proclaim ridiculous consequences of the bad translation to be the correct answers, mostly just to go on about how other answers must be caused by scope insensitivity or the like. One could choose a far less mindkilling example as well - suppose we have a library stored digitally on some hard drives. We can evaluate the worth of the whole library. We do value books, and we even value extra copies of a book (they can allow quicker access, or improve reliability), but only up to a point - no number of copies of Battlefield Earth will make up for entirely losing Hamlet, even if we assign copies of Battlefield Earth positive worth, and there's nothing incoherent about that. Dmytry (talk) 07:40, 23 June 2013 (UTC)
Utility isn't a limited mathematical tool. For any set of actions you can map to the reals, utility can be used to produce any ordering. If utility were such a bad abstraction, then money wouldn't work out. Of course some people find it hard to understand that multiplying something greater than zero by an arbitrarily large number can produce an arbitrarily large number, or at least can't resist adding some unknowns if their intuition betrays them. --83.84.137.22 (talk) 19:22, 23 June 2013 (UTC)
U(A,B) need not be equal to U(A)+U(B), and it generally is not (when A and B are two instances of one book, for example). Studying physics, one gets accustomed to linearity being an approximation only valid for small values. Likewise, U(A,A,A...) where A is repeated N times need not be equal to U(A)*N. The dis-utility of the world where one person is being tortured can consistently be larger than the dis-utility of the world where any number of people, even an infinite number, get a minimal unit of pain once. Furthermore, if you make U(A,B) equal to U(A)+U(B), this gives positive utility to changing it to A,A or B,B, whichever is larger - i.e. to replacing everyone with the happiest being. edit: ohh, and yes, utility is not a limited tool, it's indeed general enough to encompass the choice of dustspecks just as well. Dmytry (talk) 09:12, 25 June 2013 (UTC)
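To make the non-additivity point concrete, here is a minimal Python sketch under one assumed discounting rule (the half-per-copy decay and the function library_utility are purely illustrative, not any standard definition): duplicate copies of a title quickly cap out in value, while distinct titles still add.

 # Non-additive utility: each extra copy of the same title is worth half the
 # previous one, so U(A, A, ..., A) stays below 2 * U(A) no matter how many
 # copies there are, while distinct titles still add in full.
 from collections import Counter
 
 def library_utility(books, base_value=1.0, copy_discount=0.5):
     total = 0.0
     for title, count in Counter(books).items():
         total += sum(base_value * copy_discount ** i for i in range(count))
     return total
 
 print(library_utility(["Hamlet"]))                       # 1.0
 print(library_utility(["Hamlet"] * 2))                   # 1.5, not 2.0
 print(library_utility(["Battlefield Earth"] * 1000))     # ~2.0, capped
 print(library_utility(["Hamlet", "Battlefield Earth"]))  # 2.0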
None of that is even included in the concept of w:utility. --85.76.43.30 (talk) 11:45, 26 June 2013 (UTC)
I ignored this analogy of yours by itself the first time I saw you make it, because frankly it's a bad analogy. Instead I attempted to steelman it with my "Red-Suffering" and "Blue-Suffering" counterproposal. Now you repeat it again as if it's supposed to be a good argument? Let me address it by itself then: It's clearly obvious why the utility of an additional copy of the same book decreases compared to the original, and also compared to the first few copies. It's not at all obvious why calculating suffering across people cannot be done linearly, so that 2 people suffering X would indeed be exactly twice as bad as 1 person suffering X. This is an obvious difference between your analogy and the dustspecks-vs-torture example, which at the very least you need to address if you think that your analogy works as an argument. Aris Katsaris (talk) 10:39, 23 June 2013 (UTC)
The argument with dustspecks is that those who opt for dustspecks suffer from scope insensitivity and are inconsistent and otherwise stupid. The library provides an example showing that there's nothing inconsistent about that in general. Furthermore, for an advanced society with copying, the library is a direct analogy, because people can be copied and/or run on multiple redundant hardware. If I were you, I would be quite a bit more worried about funding an AI research effort by people with such a poor grasp of mathematics and such a tendency to change their moral views in light of what the "math" says. The doomsday in question doesn't fall out of the sky, it is a product of badly done AI research, and to decrease risks you need to fund above-average research; what you're doing is funding well-below-average ethical musings. Dmytry (talk) 16:45, 23 June 2013 (UTC)
"The argument with dustspecks is that those who opt for dustspecks suffer from scope insensitivity and are inconsistent and otherwise stupid." There's two different questions, (1) what is the right choice in regards to dustspecks-vs-torture, and (2) if there is a right choice, what is the reason that some people get it wrong. Yudkowsky's answer to the second is "scope insensitivity". Your answer to the second is that those of us with different opinions are mad, ridiculous, insane, stupid beyond belief, etc, etc, etc. I don't see how that makes you any more charitable than Yudkowsky. And "If I were you"? You're not me. For starters you believe me a "little piece of shit" and I believe you filled to be filled with fanatic malice and seething hatred. Secondly, you're positively incapable of discussing any of these issues without discussing Yudkowsky, which makes you useless. Aris Katsaris (talk) 17:08, 23 June 2013 (UTC)
"Your answer to the second is that those of us with different opinions are mad, ridiculous, insane, stupid beyond belief, etc, etc, etc." Is it that general, really? Or is it just that I believe certain quite specific different opinions that you seem to hold very dear to be mad, ridiculous, insane, and stupid beyond belief? As many other people do. I didn't start the thread "is Yudkowsky insane" on basis of his dustspecks answer itself, remember? (Albeit there's a lot of insane reasoning one can use this answer in). What I find ridiculous is the belief that there's some mathematically derived correct answer here, not the particular choice of the answer, and I indeed did believe you to be a little piece of shit on basis of you going on with false accusations that I am lying (which you opted for a third party to arbitrate, no less, and the third party agreed with me, which resulted in donation of 100 euros to GiveWell's top charity on your behalf). Though now I think you got exploited for money and now you strive to avoid seeing yourself as a sucker. There isn't really middle ground where it is a minor mistake and hence you'll likely stay donating to this ridiculous organization for a while. Dmytry (talk) 08:54, 25 June 2013 (UTC)
"As many other people do." Many other people are religious idiots, and think atheism to be mad ridiculous and stupid. The issue is that there's a very high correlation between people who can actually use *arguments* and the people who *don't* consider them ridiculous and stupid. "What I find ridiculous is the belief that there's some mathematically derived correct answer here". Yes, hur, hur, we nerds believe mathematics to be relevant to real life, look at the nerds be all mathy and stuff. This is why I consider you an anti-intellectual neanderthal. I still stand by my "false accusations", btw -- you've repeatedly shown utter disregards for the truth whenever you think you can get away with it, and I've known your falsehoods about my own words personally. The third party (David Gerard) claimed you honest -- and I realized I should not have thought that your lies were so obvious to everyone as they were to me, but this doesn't mean said lies didn't remain obvious to *me*. Aris Katsaris (talk) 09:25, 25 June 2013 (UTC)
Also, since the last time I donated to MIRI, back in January or December, I've received more and more confirmation that it was a good choice (given their large research output in recent months), and I've received more and more confirmation that most of its critics are utter bozos, especially its critics on rationalwiki (given how they revert even obvious factual corrections by other rationalwiki-ers) -- so no worries here, the way it's going MIRI is sure to receive even more money from me this year than they received last year. If you had a good argument against MIRI or Yudkowsky, you'd not be reduced to the crappy arguments you've been constantly making, Dmytry! The more crappy arguments you make, the more evidence it is that you don't have any good ones. Aris Katsaris (talk) 09:25, 25 June 2013 (UTC)
I think "research output" is generally measured in peer-reviewed paper output. By this mark, MIRI/SI has no research output for its entire existence, though I believe they may currently have a single paper under review. I'm amazed that the donors don't lynch Yudkowsky for spending money that was ostensibly donated for research WRITING HARRY POTTER FANFIC AND BLOG POSTS. The majority of SI/MIRI's output has been outside its supposedly core activit. --68.7.144.205 (talk) 22:45, 26 June 2013 (UTC)
You know, I have myself been successful in passing peer review with an uninspired and all-around useless paper - and I'd judge the utility of any one *paragraph* of MIRI's non-peer-reviewed work as greater than the whole of *my* peer-reviewed paper put together ("peer review" may not be a very good filter for quality, you see). And speaking as a donor, I came across LessWrong via that Harry Potter fanfic, and so did at least 22% of LessWrong's readers. It's been purposefully used as an advertising method for the community, which explains our lack of outrage. :-) Aris Katsaris (talk) 10:48, 27 June 2013 (UTC)
You're seriously arguing "well, I've got shitty papers published myself, so papers must be worthless"?
From the outside view, MIRI's achievements are minimal - they've achieved bugger-all. You consider that just fine, but that doesn't make anyone noticing it an idiot for disagreeing with you - David Gerard (talk) 12:53, 27 June 2013 (UTC)
"You're seriously arguing 'well, I've got shitty papers published myself, so papers must be worthless'"? Hardly as strong a claim as that. I'm just arguing that you should care about the quality of a paper more than about whether it is "peer-reviewed" or not. Aris Katsaris (talk) 14:18, 27 June 2013 (UTC)
"From the outside view, MIRI's achievements are minimal" True - their achievements are nothing but a tiny first small step in what I personally consider the right direction and might come to something or to nothing decades down the line. But what great things has e.g. Rationalwiki achieved? And don't worry, my criteria for calling someone an idiot are stricter than their opinion on MIRI. e.g. even unrelated to his opinion of MIRI, shouldn't I feel entitled to consider someone like "Hipocrite" an idiot based solely on the content of his communication with me on this page? Or do you really believe that he addressed my points intelligently? (And I've never called *you* an idiot for example, I consider you to be about my own level of intelligence, based on ability to cogently discuss, though I don't know if you judge the same about me. And certainly if you actually claim that Hipocrite spoke intelligently, my opinion of your intelligence will take a nosedive.). Aris Katsaris (talk) 14:18, 27 June 2013 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── "I came across LessWrong via that Harry Potter fanfic ... It's been purposefully used as an advertising method for the community". Indeed. But you can hardly argue that Harry Potter fanfic would be the pinnacle of what a bunch of actually technically capable and bright people would produce in over a decade to grab attention and gain status. As for comparisons to RW, well, RW is not a research outfit, so we would have to look at the individuals. I'm pretty sure that if we do a head count of degrees, RW comes out ahead - there are at least 2 people here who actually did research in AI, even. And for myself, I have developed various non-realtime 3D graphics software and a fairly profitable video game, and am #10 of all time for impressive marathon debuts on TopCoder. I'm currently working as an independent contractor for the Max Planck Institute for Neurology. For a hobby, I am currently building various electronic devices; I am quite inexperienced in that, but "hack-a-day", a popular electronics website, described my most recent toy as "[Dmytry]'s MicroGeiger prototype is one of the smallest and most useful we've seen. [of homemade radiation detectors]". If I were to claim to be doing world-saving research, my followers might still be met with derision (because frankly it'd still be quite ridiculous), but they would not look nearly as stupid as they would if I had never finished any technical projects. Dmytry (talk) 15:34, 28 June 2013 (UTC)

Ugh, I hoped I would be ending my participation in this page/thread. Apologies for breaking said promise, and I'll be going again now. Think what you want of me. Aris Katsaris (talk) 14:23, 27 June 2013 (UTC)
So I, a person who among other things gets paid to do math for a living, am an anti-intellectual here for not believing that you can mathematically derive correct answers to moral dilemmas of this kind. Nice. Enjoy saving the world, and enjoy becoming more and more sure that all critics of LW are anti-intellectual bozos, because you seem to be able to reach such a conclusion from anything. What do you think about Greg Egan? Another anti-intellectual math-hating bozo, I guess? Dmytry (talk) 09:37, 25 June 2013 (UTC)
If he ever calls me a "cultist" or "acolyte" or whatever, or if he ever treats 3^^^3 as if it's 7 billion, I'll be sure to call Greg Egan a bozo too. Aris Katsaris (talk) 10:17, 25 June 2013 (UTC)
Nebuchadnezzar, it's not the entirety of ethics that is cordoned off, it's just politics itself that is discouraged -- and don't you have all the rest of the Internet you can use to discuss politics in? There are individual LessWrong members who discuss politics frequently (e.g. Yvain's blog has recently been among my top favorites in this regard); they just don't do it in the LessWrong forum itself. Aris Katsaris (talk) 10:39, 23 June 2013 (UTC)
You can't discuss meaningful, concrete problems with the real world without touching politics. Perhaps it's for the best, though, because my attempts to discuss real-world issues made me discover that LWers often held surprisingly conservative views for a supposedly "rationalist" community (e.g. my posts on LGBT identity revealed sexist and transphobic beliefs among a significant fraction of the commenters). If you ban all politics, you're left with discussing matters so abstract that they end up completely detached from reality. - LucidFox (talk) 11:34, 24 June 2013 (UTC)
Politics aren't really "banned", just vaguely discouraged. And I don't know what specific incident you have in mind, so I can't specifically comment on it. My own perspective on this is that, despite LessWrong being accused of "groupthink" hereabouts, a problem people seem to have is how *tolerant* LessWrong is of all coherently stated opinions; and people who hate LessWrong of course tend to focus their attention on those opinions they don't themselves share. According to the last survey LessWrong was something like 35% liberal, 30% libertarian, 25% socialist, 3% conservative, and it has a couple of neoreactionary voices in there too -- because of that, I have recently seen it accused by different people of being communist, or being the libertarian puppets of Peter Thiel, or of being sexist/racist, or alternately of having drunk the progressivist/egalitarian/feminist koolaid. Frankly LessWrong as a whole can't be all those things at the same time, though obviously different individual members can. Aris Katsaris (talk)
"If you ban all politics, you're left with discussing matters so abstract that they end up completely detached from reality." Isn't that pretty much what they do? They are contemplating Skynet's takeover and have nightmares about robot hell.--Baloney Detection (talk) 22:43, 24 June 2013 (UTC)
Baloney Detection, completely detached from reality is when you accused AD of being a Yudkowsky drone and reverted the pretty easy-to-verify factual corrections he was making. If you want to give a sign of caring about reality, the easiest first step would be to not deliberately insert falsehoods. A second step would be to correct the falsehoods reverted-back-into-the-article by you and Nebuchadnezzar.Aris Katsaris (talk) 23:57, 24 June 2013 (UTC)
I admit claiming AD to be a Yud drone was inaccurate. What falsehood have I and Neb added to the article? When will you admit that Yud is wrong about science vs Bayes?--Baloney Detection (talk) 10:22, 25 June 2013 (UTC)
Gee, the falsehood about "requiredism", for one, which you two kept adding back after AD removed it? "When will you admit that Yud is wrong about science vs Bayes?" You're seriously asking that? When I'm convinced he was wrong. The entirety of your counterargument was to claim "category error", which indicates to me that you failed to even understand what the comparison was about. Aris Katsaris (talk) 10:41, 25 June 2013 (UTC)
Requiredism was not a falsehood. "When I'm convinced he was wrong" Ok, what would convince you that he is wrong?--Baloney Detection (talk) 15:02, 25 June 2013 (UTC)
It's a blatant falsehood that is obvious for any honest person to see. Your stubbornness on this point reveals the fundamental lack of interest you have in honestly representing facts. But be my guest and keep it like this, because it's a VERY clear falsehood, and the longer it remains there the longer it'll serve its usefulness in revealing the moral bankruptcy of rationalwiki. "what would convince you that he is wrong?" If you want to convince me that they're equal, have someone recreate Nate Silver's success in predicting future election results by the process of formulating and rejecting/provisionally accepting hypotheses after testing (namely via "Science"), rather than via Bayes. If you want to convince me that science is superior, have someone predict an election result (using the scientific method) that Nate Silver failed to predict using Bayes. And if you argue like Nebuchadnezzar that science is an abstraction that can include Bayes, then I'll straight out say that this means you never even read the actual article "The Dilemma: Science or Bayes?", which very clearly refers to science explicitly in the sense of the specific "traditional training" of science that has come down (hypothesis, prediction, experimentation, etc), not according to some Platonic abstraction which can fit anything within it. In the sense that the word science is used in that article they're VERY easily comparable, there are no different categories involved, and again you reveal disinterest in reality if you pretend differently. Aris Katsaris (talk) 15:25, 25 June 2013 (UTC)
Let me make sure I understand - if we don't ban this one person from our website, we are morally bankrupt? Let's count pedos at Less Wrong! Hipocrite (talk) 16:07, 25 June 2013 (UTC)
Who said anything about banning anyone? I'm talking about removing the *falsehoods*, which any of you can do, but the majority of you don't care. Aris Katsaris (talk)

──────────────────────────────────────────────────────────────────────────────────────────────────── Your word salad of uneducated bloviating about things you don't understand made my eyes glaze over. What specifically do you want changed? Hipocrite (talk) 16:27, 25 June 2013 (UTC)

So basically you're saying "I don't understand you, therefore you must be stupid"? That's another nice summary of the rationalwiki ethos. The change in question has already been discussed by me in a previous talk section, and implemented by AD; and then twice reverted, first by Baloney and then by Nebu. Use your l33t mouse-clicking skills to look up the history of the page. But that's just one example. The disinterest in factual accuracy runs deep. Another simple example of falsehood would be the claim about "total utilitarianism". A third false claim would be that "A disengagement from the practical" is supposedly explicitly affirmed. But these are all just symptoms of the disease. Aris Katsaris (talk)
And btw, I think I'm fully done with this discussion. Feel free to continue your lies and insults undisturbed. You're sure you're in the right, after all; therefore you don't see anything wrong with using any means, including lies, in your fight against the enemy, right? Aris Katsaris (talk) 16:59, 25 June 2013 (UTC)
So, to summarize "blah blah blah I won't tell you what I want changed." Ok then! Hipocrite (talk) 17:07, 25 June 2013 (UTC)
Ironically, the change that is going to happen is that people will get annoyed enough to hunt down far worse instances of Yudkowsky proudly proclaiming ignorance of philosophy. Also, by the way, ignorance doesn't imply independent reinvention - for me what particularly stands out is his "Beyond the reach of God", which is a bad presentation of a very awesome idea and looks like a reinvention, except that the idea is well presented in Greg Egan's Permutation City, which Yudkowsky has very likely read, and in a way dates back to Plato. Dmytry (talk) 20:24, 25 June 2013 (UTC)
LOL, yes, as a last note: Yudkowsky is indeed very likely to have read the book he openly refers to as one of "the greatest hard-SF book ever written", openly lists in his favourite books list, and has even written fanfiction about. Sayonara! Aris Katsaris (talk) 05:40, 26 June 2013 (UTC)
What are you excited about here? "Beyond the reach of God" doesn't mention it in the slightest. The point being that his seeming ignorance of philosophy combined with his philosophical musings doesn't imply that he's actually reinventing any philosophy. Dmytry (talk) 07:22, 26 June 2013 (UTC)
Well you see, it's a bad idea to get your information about "traditional" science from Yudkowsky. He is clueless on the subject.--Baloney Detection (talk) 16:54, 25 June 2013 (UTC)
"...very clearly refers to science explicitly in the sense of the specific "traditional training" of science that has come down (hypothesis, prediction, experimentation, etc)..." So you missed my entire point, which was that science does not actually operate according to this "traditional training." Nebuchadnezzar (talk) 21:57, 25 June 2013 (UTC)
But if Yudkowsky says so then it must be true! ^^--Baloney Detection (talk) 14:26, 27 June 2013 (UTC)

Luke Muehlhauser is a world class jerk

I caught the following LW post from another SB thread on LW (well worth reading, including a little discussion about the ridiculously overrated Yvain). Luke Muehlhauser wrote: "So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked. I thought she would appreciate this because she had previously expressed admiration for detailed honesty. Now I realize that there's hardly a more damaging way to break up with someone. She asked that I kindly never speak to her again, and I can't blame her." What the actual fuck?--Baloney Detection (talk) 16:41, 30 July 2013 (UTC)

Are you seriously excerpting the "before" part of an essay on self-improvement as evidence of how someone is a jerk? That's like proving a dieter is still fat by showing an old photo.--ADtalkModerator 13:39, 1 August 2013 (UTC)
Now that he has "improved" himself, next time he's "improving his relationship" to be less "sub-par" he'll be able to keep his old girlfriend as a fallback and avoid becoming celibate for a few years. Dmytry (talk) 13:27, 2 August 2013 (UTC)
That is silly - that was the "look at what an asshole I was" example - David Gerard (talk) 20:38, 2 August 2013 (UTC)
Did you read the whole thing? The rest of the article proceeds to PUA and to him basically describing how much better at manipulation he is now; it's not about morality or moral improvement. Dmytry (talk) 04:53, 3 August 2013 (UTC)
Check out "Colour My Natural Selection" on youtube. Just sayin'. User:tuttlemsm 6:07, 3 August 2013 (CST)

Response to this article

If I put up a response to this article on the LW wiki, will you guys link to it from the bottom?--96.24.72.43 (talk) 07:42, 10 June 2013 (UTC)

Shouldn't think so. Such a self obsessed crowd I have never known. Scream!! (talk) 07:46, 10 June 2013 (UTC)
I think we'd probably link it for the LOLZ, to be honest. Hipocrite (talk) 11:43, 10 June 2013 (UTC)
Personally, I don't think any criticism spoken in the Rationalwiki page is of such quality that it deserves a response in the LW wiki. To mention a simple example, let's say that LW responds to RationalWiki that RW's prime example of how we supposedly use new jargon to label old concepts (supposedly "requiredism" instead of "compatibilism") has only ever actually been used in LessWrong a single time and then only for the *explicit* purpose of *contrasting* it to "compatibilism" (rather than because of the supposed ignorance of the term "compatibilism" which RationalWiki implies). At best this will cause RW to remove a single falsehood, and the actual problem (being that most editors -- with some few bright exceptions -- lack interest in a fair presentation of the subject) would remain intact. But you can't fix "not caring" with responses; because they don't care. Aris Katsaris (talk) 13:45, 11 June 2013 (UTC)
You're right, that's a bad example of that sort of behavior on LW. Thank you for pointing it out.--ADtalkModerator 19:54, 16 June 2013 (UTC)
One could just go through their jargon and look up mainstream terms. Also Yudkowsky professed pride in his ignorance of philosophy on multiple occasions (he's trying to position himself as a technical guy rather than a philosopher). Dmytry (talk) 18:30, 21 June 2013 (UTC)
Well, you can always check out [http://forums.spacebattles.com/threads/less-wrong.245730/ this very long] discussion about LW. Not at all kind to it.--Baloney Detection (talk) 19:16, 16 June 2013 (UTC)
Add to that, this entry contains Roko's basilisk, which you are not allowed to discuss on LW.--Baloney Detection (talk) 20:15, 16 June 2013 (UTC)

Late to this discussion, but sure, of course we could add the link at the bottom. ħumanUser talk:Human 05:56, 6 August 2013 (UTC)

"Disengagement from the practical..."[edit]

I would like to change the following sentences in the criticism section from,

A disengagement from the practical, beyond self-improvement, is another feature of LessWrong's culture, explicitly and strongly affirmed.[24] This refusal to delve into contemporary politics or policy is held up as laudable, because it is seen as a way to preserve objective rationality.

to

LessWrong is mainly concerned with achieving accurate beliefs about the world, rather than achieving goals. The refusal to delve into contemporary politics or policy is held up as laudable, because it is seen as a way to preserve objective rationality.[24]

I don't think that the second sentence supports the first sentence, in the original phrasing. The current phrasing seems to equate politics with instrumental rationality. Which seems wrong? It is however true that LessWrong is mainly concerned with epistemic rationality, and that a lot of the ideas being discussed are detached from daily life and only relevant from a philosophical point of view.

What do you think? Do you agree with the new phrasing? XiXiDu (talk) 14:32, 6 August 2013 (UTC)

I think the grammar in this article is mostly way too convoluted and untangling it like this would be good - David Gerard (talk) 15:03, 6 August 2013 (UTC)