Talk:LessWrong/Archive3


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Bayesianism as opposed to Frequentism

What the hell is up with that local trope? I don't get it. You get Bayes' rule from frequentist reasoning; that's how they derive Bayes' rule themselves. I've heard it said that frequentism is somehow "bad epistemology", whatever that means. The whole thing looks like a case of horrible fuzzy confusion of philosophy with mathematics.— Unsigned, by: Dmytry / talk / contribs

Both frequentism and Bayesianism are useful approaches in different circumstances — choose the right tool for the job. The idea that there is a single all-round best approach to reasoning is silly in my opinion. (((Zack Martin))) 11:21, 9 June 2012 (UTC)
Shut up, you bloviating fuckwit. Your statement says nothing, provides no predictive power and is a perfect example of extrapolation from personal ignorance - David Gerard (talk) 12:17, 9 June 2012 (UTC)
If I'm playing a dice game, and want to calculate the odds of some particular throw, frequentist methods are useful, Bayes rule isn't really useful. As I said, use the right tool for the job. (((Zack Martin))) 12:52, 9 June 2012 (UTC)
Yudkowsky got that trope straight out of Jaynes. Modern Bayesian textbooks (e.g. the one I'm ploughing through in fits and starts) point out that Bayes is a theorem, hence a mathematical fact; that the entire frequentist cookbook can be derived by Bayesian means; and that the big problem with pure Bayes is knowing what the hell your prior actually is in the first place. Bayes as ideal, frequentist cookbook as practical methods - David Gerard (talk) 12:17, 9 June 2012 (UTC)
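To make the "Bayes is a theorem" point concrete: the rule is just the product rule of probability rearranged. A minimal sketch in Python, with invented numbers in the style of a diagnostic test:

  # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
  # It follows from the product rule P(H,E) = P(E|H)P(H) = P(H|E)P(E).
  # All numbers below are invented for illustration.

  prior = 0.01            # P(H): prior probability of the hypothesis
  p_e_given_h = 0.95      # P(E|H): likelihood of the evidence if H holds
  p_e_given_not_h = 0.05  # P(E|~H): likelihood of the evidence otherwise

  # Law of total probability gives P(E):
  p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

  posterior = p_e_given_h * prior / p_e
  print(posterior)  # ~0.161: strong evidence, but the low prior still dominates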
Well, it sounds to me like 'deriving' fluid dynamics from Archimedes' principle. If you have the classical laws of physics, you can derive the probabilities in any dice experiment, along with deriving Bayes' rule, by taking the probability as the limit of the frequency (and counting over the observations that ended up compatible with knowledge). The priors are your freedom-to-be-wrong when all you care about is not getting Dutch-booked (being internally consistent) but don't care to be correct about any system. You don't get to choose arbitrary values when you want to be correct about a physical system, but you do end up with plenty of arbitrary values that you can set to anything you want when you only care about some relations between your predictions. Dmytry (talk) 15:10, 9 June 2012 (UTC)
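A sketch of the frequency-limit picture described above, assuming a fair die and simulated rolls (Python; the trial counts are arbitrary):

  import random

  # Frequentist reading: P(roll == 6) is the limiting relative frequency
  # of sixes over many independent rolls of a fair die.
  random.seed(0)

  for n in (100, 10_000, 1_000_000):
      sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
      print(n, sixes / n)  # tends toward 1/6 ~= 0.1667 as n grows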
I know Bayes' rule is often called a "theorem", but I'm not sure that is really appropriate. Attempts to demonstrate it run into difficulty, and many people seem not to bother. Jaynes' attempt to justify Bayes' rule follows Cox's approach, which has problems. (((Zack Martin))) 12:52, 9 June 2012 (UTC)

Archive?

I think we should archive much of this talk page, given its length--il'Dictator Mikal 20:20, 10 June 2012 (UTC)

It shall be done. Peter Blessed are the cheesemakers 23:18, 10 June 2012 (UTC)
It has been done. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:53, 11 June 2012 (UTC)

Reactions to LW elsewhere

Some random forum's reaction to LW: http://www.poe-news.com/forums/sp.php?pi=1002430709 http://www.poe-news.com/forums/sp.php?pi=1002405574 The next time they complain that RationalWiki is bad, you can link to these...

Kinda fascinating. Though which perspective do they come from? Do they just throw dirt at everything and everyone on the Internet?--Baloney Detection (talk) 19:57, 13 June 2012 (UTC)
No idea... seems like a fairly normal forum, making fun of Conservapedia and liking RationalWiki: http://www.poe-news.com/forums/sp.php?pi=1002364960 . Also, in the LW thread someone explained the interferometer mistake. If I search for random bashing I tend to find stuff that well deserves bashing, like http://www.poe-news.com/forums/spshort.php?pi=1002307636&ti=1002307636 , not Justin Bieber/Lady Gaga/etc. bashing. Dmytry (talk) 09:00, 14 June 2012 (UTC)

LessWrongian misunderstandings of science

Among the goals of RW are:

  • Analyzing and refuting pseudoscience and the anti-science movement.
  • Documenting the full range of crank ideas.

Given this, what misunderstandings of science does Yudkowsky (and thus most LessWrongians) have that you can think of? What runs through my mind is that Yudkowsky and most of his acolytes believe that science and Bayes' theorem are in conflict with each other.

What LessWrongians also don't get, for the most part, is that you still have to perform the experiment. You can't simply gloss over it with a naive application of Bayes' theorem.--Baloney Detection (talk) 17:51, 4 June 2012 (UTC)

Well, frequent misuse of terms like Solomonoff induction and Kolmogorov complexity would be an example of pseudo-mathematics (as would most references to Bayes and Bayesian something). They just use math as technobabble. In cognitive science they promote the view that human mental function can be replicated with very, very little computational power compared to what is commonly believed to be the lower bound (i.e. basically they promote the views of the most optimistic AI researchers of the 1950s). The pseudoscience is not about incorrect predictions or incorrect theories; the pseudoscience is about incorrect process. What they promote is an incredibly sloppy process of reasoning. They promote the very idea of believing in the results of sloppy thought on terms you poorly understand. Back when I was still trying to contribute constructively, I would speak against the sloppiness: http://lesswrong.com/lw/ai9/how_do_you_notice_when_youre_rationalizing/5y3w . Note that the edit only added an expanded explanation when it was at -1 or -2. It was at -7 or -8 at peak, before someone wondered why. You could say that Eliezer is a moron and get fewer downvotes. This illustrates what they are against: the notion that you need rigorous, careful reasoning to be correct. They believe some fuzzy, ill-defined 'rationality' suffices (and not only suffices, but is superior). And they don't like experiments. Dmytry (talk) 06:39, 5 June 2012 (UTC)
This post makes me question how well Yudkowsky really understands probability theory. My conclusion is: not very well. So what if "positive and negative infinity are not integers"? Yeah, they aren't integers, but they are affinely extended integers. So what if the odds ratio gives positive infinity for a probability of 1? We need not limit ourselves to the reals; we can use the extended reals instead. Same for his silly log odds argument. "Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence." So? What's wrong with the idea that we have infinitely strong evidence for some propositions? How strong is the evidence for 1+1=2 - infinitely strong or only finitely strong? If only finitely, can you quantify it? I think not - which supports the position that we have infinite evidence for this proposition. Yudkowsky needs to say what the probability is that Bayes' theorem is true. And should Bayes' theorem not be modified to account for the non-zero probability that it is wrong? (((Zack Martin))) 12:47, 5 June 2012 (UTC)
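For anyone following the log-odds argument both sides keep invoking, a small Python sketch of the standard definition (nothing LW-specific is assumed):

  import math

  def log_odds(p):
      """Log odds (logit) of a probability p."""
      return math.log(p / (1 - p))  # raises ZeroDivisionError at p == 1

  for p in (0.5, 0.9, 0.99, 0.999999):
      print(p, log_odds(p))  # grows without bound as p -> 1

  # log_odds(1.0) divides by zero: "infinite certainty requires infinitely
  # strong evidence" is just the observation that this quantity diverges.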
In addition, I would say that they have a poor understanding of human capabilities and intelligence in general. Not woo-like 'everybody is a genius,' just the normal sort of capabilities we can see from observation of humans... any human. Although yes, people are more than capable of being stupid, the biggest idiot in the world is still very intelligent. Human perception and analysis are very complex and very nuanced, and we're capable of all sorts of amazing things that we take for granted. Visual recognition and pattern recognition are ridiculous by the standards of the natural world, even for the proverbial left pointy end of our bell curve, for example. We have our glitches and we have societal pressures that change our conclusions (don't get me STARTED on their dismissal of social sciences!) but on the whole we're pretty impressive as far as animal life on this planet goes. The belief that humans are naturally 'stupid' (but not them; they think in the magic golden formulae) is often used as a strange hubris, and causes the crux of their science fail. The belief that mastering human-level intelligence in an AI is attainable within the century, the idea that they don't have to test things, the dismissal of non-secret-clubhouse-people and their perhaps well-supported conclusions or objections... they all spring from the 'humans are dumb' falsehood that supports the massive egos of at least a few there. It just takes one bad idea to give rise to a host of bad ideas. Just one concept. Even in otherwise smart people. ±KnightOfTL;DR lavishly loquacious 12:55, 5 June 2012 (UTC)
"don't get me STARTED on their dismissal of social sciences!" But evo psych is a social science...right? Nebuchadnezzar (talk) 16:34, 5 June 2012 (UTC)
I find they are often all too willing to use evolutionary psychology to confirm their position that humans are stupid, but comparatively less willing to address social issues that guide the thoughts of individuals and groups just as much as (if not more than!) genetics do. I usually see much less emphasis on education (how can people understand their philosophy if they aren't actually taught critical thinking skills?), or on understanding the cultural reasons why people believe what they do. It goes along with the 'politics is the mindkiller' idea; they quite often avoid actual issues that are relevant to their discussion in order to hold their own position. ±KnightOfTL;DR more at 11 00:44, 6 June 2012 (UTC)
I was being more than half sarcastic -- forgot the dick quotes around "science." The problem is not that they think humans are stupid, but that they think "non-rationalists" are stupid. Either that or they think "rationalist" is not a subset of "human," which I find to be less plausible, though they could be confusing their imagined future selves with their current selves. Nebuchadnezzar (talk) 01:35, 6 June 2012 (UTC)
Kind of both. Obviously not all of them think this, but I always walk away with a taste of 'We have overcome our humanity, we are better than normal humans' even out of the gate, no AI required. Like their rationalism is a hack or a fix for all of the things that come with not only being homo sapiens, but also being a participant in whatever society they individually happen to belong to. Also, sorry... I am not so good at sarcasm on the internet. I had a feeling but then the urge to type happened. :( ±KnightOfTL;DR sufficiently advanced argument still distinguishable from magic 03:50, 6 June 2012 (UTC)
My impression is that they confuse play-pretending to be Spock with gaining powers of correct reasoning. The scientists don't aspire to play-pretend to be Spock and are therefore more wrong. Dmytry (talk) 13:09, 7 June 2012 (UTC)
But look at the results when you shut up and multiply! (The process was: 1. Read LessWrong article. 2. Decide that superior rationality means you can make a bundle just showing up in Australia with a working visa. 3. Be surprised when reality doesn't cooperate. Hopefully they're having decent tourist experiences.) - David Gerard (talk) 22:11, 7 June 2012 (UTC)
Ghahaha... perfectly illustrates my point. The fool hasn't done even 1 percent of the work required to declare a country optimal for employment; he only did very incomplete work with a serious rational Vulcan face on. All of the rationality there works like this: do some insufficient or entirely flawed reasoning with a serious Vulcan face on, and expect to be magically better than all the 'non-rational' folks. Dmytry (talk) 06:34, 8 June 2012 (UTC)
No, Maratrean, that's you being a bloviating fuckwit again with startling delusions of relevance. What he means is that there is no such thing as philosophical certainty, thus 0 or 1 are not probabilities; this does not rule out practical certainty, where the difference is pretty much epsilon. Your problem with the log odds of a probability is your problem; they work perfectly well and make the numbers in Bayes' Theorem easier. tl;dr Shut the fuck up. You're a net negative. Just shut the fuck up. - David Gerard (talk) 19:11, 5 June 2012 (UTC)
Ya, the article looks OK to me; elementary math. If EY has trouble with statistics, it's at the more abstract level of suggesting that people should be doing those 'Bayesian updates' on an incomplete graph with cycles, where new nodes are created based on the probabilities of existing nodes, and where two nodes that affect the probability of one node may, in turn, both be affected by a third node, which would get double counted unless everything is tracked. Which is BTW where science is 'at odds' with his understanding of Bayes: scientists know the math better and know not to risk double counting evidence and circular updating. Also, BTW, it's really awesome for accepting only the arguments you like: if you like an argument, proclaim that you 'update' on it; if you don't, either call it non-Bayesian or argue the evidence may be double counted. That really helps with beliefs like the AI stuff. Dmytry (talk) 21:35, 5 June 2012 (UTC)
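A toy numeric version of the double-counting worry, using the odds form of Bayes' rule with an invented likelihood ratio:

  # Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
  # Feeding the SAME piece of evidence in twice multiplies its ratio in twice.

  prior_odds = 1.0        # 1:1, i.e. p = 0.5
  likelihood_ratio = 3.0  # invented: evidence E favours H three to one

  once = prior_odds * likelihood_ratio   # 3:1 -> p = 0.75, the correct update
  twice = once * likelihood_ratio        # 9:1 -> p = 0.90, E counted twice

  def to_p(odds):
      return odds / (1 + odds)

  print(to_p(once), to_p(twice))  # 0.75 vs an overconfident 0.9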
@DavidGerard — the definition of "probability" is not directly connected to philosophical certainty; an event with probability 1.0 is not necessarily philosophically certain, and can even fail to happen — almost surely happening and surely happening are not the same thing. And I have no problem with log odds; if there is anyone here who has a problem with them, it is Yudkowsky, not me. (((Zack Martin))) 01:43, 6 June 2012 (UTC)
@Dmytry: Where does he argue that science is "at odds" with Bayesian statistics? I don't see how he can even make that argument, especially since he is in AI/cog sci, where Bayesianism is all the rage these days. Nebuchadnezzar (talk) 01:47, 6 June 2012 (UTC)
I'm not Dmytry, but I'll answer anyway. Here he does argue exactly that: http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ --Baloney Detection (talk) 08:09, 6 June 2012 (UTC)
He argues against Science (with a capital S), which is a word he's using for the social process of science (which of course has many fucked bits), not against the notion of science itself. (Beware negative halo effect.) - David Gerard (talk) 13:47, 6 June 2012 (UTC)
My impression is that he just doesn't like the fact that science doesn't assign probabilities to the unfalsifiable bullshit which he is full of when he's trying to do something new (such as argue about AI). His idea is that under Bayesianism, people should assign some probability to whatever he makes up (or whatever idea he takes from science fiction and presents as a serious concern of his). That's obviously faulty, because you go from effectively zero probability (which is the probability you have effectively assigned to every hypothesis that you haven't considered) to some nonzero number, which is then either too low, so you can be gamed into assigning low probabilities to true facts via bullshit arguments for them, or too high, so you can be gamed with straight bullshit such as that AI risk argument of his. Also note the reference to many worlds. He really doesn't understand jack shit about either Kolmogorov complexity or QM. The notion of the lowest Kolmogorov complexity of an objective model within which you can find yourself does not favour many worlds. It favours the iterator over all possible physical theories. Or even worse, a simple iterator over all numbers (within which you'll find yourself as a sort of Boltzmann brain). You have to discount for size, i.e. you have to discount against huge theories that predict everything at once. MWI might or might not be the most useful interpretation depending on what sort of physics you are doing. Also, his idea of how physicists understand CI is a strawman; he doesn't understand the whole point of why it is called an 'interpretation', and as pointed out by others he probably confused a phase of 180° with the complex number i when doing the interferometer example. Dmytry (talk) 15:02, 6 June 2012 (UTC)
@David Gerard: Did you actually read the article? He did put science and Bayes against each other, and asked which one the reader "pledges allegiance" to. He was strongly criticized for his lack of understanding of science by philosopher of science Massimo Pigliucci ( http://rationallyspeaking.blogspot.se/2010/09/eliezer-yudkowsky-on-bayes-and-science.html ). To my knowledge Yudkowsky has never responded to that, and probably never will (he is usually too full of himself to respond to critics). As we all know, Yudkowsky has several other problems with science when it comes to his pet issues, for example the fact that mainstream science is ambivalent about the singularity hypothesis and dismissive of cryonics.--Baloney Detection (talk) 16:30, 7 June 2012 (UTC)

Because editing long sections is hard on my phone

I have to agree with Mara on this one; that post reeks of 'does not understand the concept'. But the concept of a non-zero probability of Bayes' theorem being incorrect confuses me, unless he is engaging in pointless Pyrrhonism again. Pi 3:14 (talk) 02:00, 6 June 2012 (UTC)

No, my point is — if Yudkowsky wants to argue that no proposition has probability 1, then it follows P(Bayes theorem is true)<1.0. So, should not Bayes theorem be made more accurate by taking into account the non-zero probability of its own incorrectness? (I'm not claiming it could be wrong; but if Yudkowsky is to be consistent then he must claim that.) (((Zack Martin))) 02:14, 6 June 2012 (UTC)
Hey Mara, SHUT THE FUCK UP! LongStandingUser (talk) 02:54, 6 June 2012 (UTC)
But then you have to take into account the incorrectness of what you used to assess the correctness of Bayes' theorem, and then the incorrectness of that assessment, and the incorrectness of that... or you can just say that these mathematical statements are proven assuming your formulation of number theory is correct and that your underlying logic that builds up mathematics is correct. Otherwise, what you're merely saying is that "because you can't be 100% sure, therefore you are actually 0% sure and ANYTHING can happen, right?" The post you have a problem with is a fairly hand-wavy tutorial on philosophical certainty based on analogy to odds ratios. Abomination 08:13, 6 June 2012 (UTC)
An easy way out of infinite regresses is to assign certain assumptions a probability of 1. Anyway, part of the problem with Yudkowsky's post is he seems not to understand the relationship between probability theory and philosophical certainty. Probabilities of 1 and 0 are not necessarily philosophical certainties; an event with probability 0 can happen and an event with probability 1 can fail to happen. (((Zack Martin))) 08:19, 6 June 2012 (UTC)
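The "an event with probability 1 can fail to happen" point is standard measure theory: for a continuous random variable, every exact outcome has probability 0, yet one of them occurs on every draw. A Python sketch, idealising floats as real numbers:

  import random

  # For X uniform on [0, 1], P(X == x) = 0 for every particular x,
  # yet each draw realises SOME particular x. A probability-0 event
  # happens every time; "probability 0" is not "impossible".
  random.seed(1)
  print(random.random())  # this exact value had probability 0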
No, if you set a probability to 1 or 0 you get a division by zero, but you're too obsessed with the structure of your own anus to notice you're still talking. The problem is that you are still talking. You have nothing to say on the topic and you are still saying it. Shut the fuck up - David Gerard (talk) 13:04, 6 June 2012 (UTC)
I'm no math expert (in fact I am a math idiot) but I was under the impression that it's not that 'things with probabilities of 0 and 1 happen', it's that in some cases the probability can approach 0 or 1 infinitely closely but never reach 0 or 1. Meaning that something can be so close to 0 it's pretty much 0. Unless the rules of the cosmos suddenly change. Which may be even less probable. So you might as well relax; for all that it matters to us there may as well be 0 and 1 probability, because the alternative is so staggeringly unlikely that it makes 'improbable things happen' look like a coin flip. I could be very wrong, I don't remember where I heard this; if I am, please correct me. This is kind of a plea for clarification because learning stuff is cool and I am a math idiot. ±KnightOfTL;DR lavishly loquacious 13:30, 6 June 2012 (UTC)
Yes, that's the difference between philosophically possible (the egg might unscramble itself) and practically possible (the egg is not going to unscramble itself). If you read anything whatsoever by Maratrean, you will end up less informed than you were when you started, per this - David Gerard (talk) 13:47, 6 June 2012 (UTC)
I was under that impression, yes. Iiinnntteerreessttinnng. I find this stuff way more fascinating than I should, possibly because I'm not naturally great at math and hearing it explained is cool. Except what you said about maratrean. I was pretty aware of the uncoolness of that. :S ±KnightOfTL;DR walls of text while-u-wait 13:51, 6 June 2012 (UTC)
Talking of mathematical stupidity, how is the probability of throwing a number between 1 and 6 using a standard die not exactly 1? Similarly, I'd like to hear how the probability of throwing a seven is not zero. Bad Faith (talk) 13:46, 6 June 2012 (UTC)
No one going to mention The Last Hero in answer to this one? Ad hominem 16:23, 6 June 2012 (UTC)
The atoms in the die have a probability greater than zero (but on the order of ten to the minus-so-many that I don't even have enough numbers for it) of spontaneously rearranging themselves to form a seven on one face. This is of course not going to happen, and for conventional uses you can ignore it. The trouble is that humans are very bad at ignoring it (as Yudkowsky noted in the excellent "But There's Still A Chance, Right?") and conventionally behave as though any probability greater than zero must be non-negligible, which is of course wrong. Which is why people reply "No. No chance." But this is technically incorrect, and it's occasionally important to note it's technically incorrect - David Gerard (talk) 13:54, 6 June 2012 (UTC)
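Within the idealised die model, the endpoints really are 0 and 1; it's the model of the physical die that is the approximation. A Python sketch of the model itself:

  from fractions import Fraction

  # Idealised fair die: the sample space is exactly {1, ..., 6}.
  p = {face: Fraction(1, 6) for face in range(1, 7)}

  print(sum(p.values()))        # 1: some face in 1..6 comes up, with certainty
  print(p.get(7, Fraction(0)))  # 0: a seven is outside the sample space
  # The physical die isn't quite this model, so the real numbers are
  # 1 - epsilon and epsilon for an absurdly small epsilon.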
It's particularly important when you are making a living by Pascal-mugging people, using incredibly tiny risks (or risks you made up, which are difficult to estimate) attached to big words like Mankind. Speaking of which, another source of incredibly low probabilities: long sequences of sloppy inferences. Dmytry (talk) 14:51, 6 June 2012 (UTC)
@DavidGerard, you don't divide by zero. If you are using Bayes' theorem, P(B) can't be zero, and so if it is, you can't use Bayes' theorem. But there is a lot more to probability theory than just Bayes' theorem. It's funny how you combine cluelessness with a continual insistence that I "shut up". You are an anger bear. (((Zack Martin))) 20:30, 6 June 2012 (UTC)
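On the narrow P(B) = 0 point, the formula is simply undefined there, which a sketch makes explicit (the probabilities are invented):

  def bayes(p_b_given_a, p_a, p_b):
      # P(A|B) via Bayes' theorem; conditioning on a probability-zero
      # event is not licensed by the theorem itself.
      if p_b == 0:
          raise ValueError("P(A|B) is undefined when P(B) = 0")
      return p_b_given_a * p_a / p_b

  print(bayes(0.9, 0.3, 0.4))  # 0.675
  # bayes(0.9, 0.3, 0.0) -> ValueError rather than a division by zero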
Maratrean, I can't find DG being clueless, or even especially saying you should shut up. So shut the fuck up. (Just kidding, I don't want to shut anyone up on the RWiki, but your accusation is way out of line.) ħuman User talk:Human 03:12, 7 June 2012 (UTC)
He is clueless; for example, he accuses me of having a problem with log odds when I don't, and there is no evidence that I do. And he seems to think that "division by zero" somehow makes zero or one probabilities invalid, when it doesn't. It actually doesn't matter what I say; he'll just tell me to shut up anyway. (((Zack Martin))) 08:45, 8 June 2012 (UTC)
There is in fact no evidence whatsoever that you have ever said anything useful, relevant or informative on any topic at any time, and considerable evidence that you consistently confuse your ignorance with knowledge and then insist on enlightening the world with it. You're a waste of electrons, time and oxygen. Just shut the fuck up, you bloviating prick - David Gerard (talk) 11:58, 8 June 2012 (UTC)
Wow, you are an amazingly hostile person. What is wrong with you? If you aren't interested in what I have to say, why not simply ignore me? Maybe someone else will find my contributions more valuable than you do. (((Zack Martin))) 13:05, 8 June 2012 (UTC)
That would only be possible if they were a bloviating fuckwit in waiting. As such, Non-Bloviating-Fuckwit Decision Theory proves mathematically that they should be ignored at the least, and preferably actively hampered for the good of humanity - David Gerard (talk) 14:32, 8 June 2012 (UTC)
I don't expect anyone to agree with everything I say, but the fact that sometimes people (other than you) agree with me indicates that they find some value in at least some of my contributions. Like Pi beginning this section with "I have to agree with Mara on this one"... is Pi a bloviating fuckwit in waiting? (((Zack Martin))) 08:13, 9 June 2012 (UTC)
If you said the sky is blue, this would not be evidence that your existence has a fucking point - David Gerard (talk) 10:32, 9 June 2012 (UTC)
I think everyone's existence has a point — even yours. You seem to be a very negative and hateful person. (((Zack Martin))) 11:16, 9 June 2012 (UTC)
If you have a purpose, it's as what not to be. I love everyone, except you - David Gerard (talk) 12:17, 9 June 2012 (UTC)
Wow, you are something special. "I love everyone, except you" (((Zack Martin))) 12:43, 9 June 2012 (UTC)
@David Why do you feel this way? I feel this way ('confuses ignorance with knowledge') about much of what Eliezer is doing with quantum mechanics, Kolmogorov, etc., especially the AI: knowing nothing and going on - quite explicitly so - about how he's the most rational person. Dmytry (talk) 11:57, 12 June 2012 (UTC)
Large parts of existence as a human are about judgement under uncertainty - but there are times when people should damn well know that their ignorance isn't data (and yes, LW does it a lot). It's why nerds are nuts - David Gerard (talk) 23:14, 12 June 2012 (UTC)
@David Gerard: Engineers are the sane ones compared to non-engineers like Eliezer. I question your consistency - how exactly is Maratrean worse than a guy who actively promotes such 'judgement under not knowing jack shit'? The guy who takes money for work on 'friendly artificial intelligence'? The guy whose writings are the very pinnacle of being a 'bloviating fuckwit'? I've read some of Maratrean's essays; nothing in them, under any reading, gets even remotely close to EY. Dmytry (talk) 05:17, 13 June 2012 (UTC)
EY hasn't spent a couple of years posting tl;dr blether to RW - David Gerard (talk) 09:37, 13 June 2012 (UTC)

@David Gerard And Maratrean hasn't spent more than a decade promoting pseudoscience on the internet at large. The real pseudoscience, the kind that looks scientific but isn't, not the lame variations that don't even look scientific. EY doesn't just promote woo/BS; he creates woo (out of science fiction), promotes it, and promotes the art of woo creation (aka 'rational' speculation without attention to any subtleties of the subject). He also created an enormous battery of 'cognitive biases' allowing one to label any sort of reasoning which takes into account subtleties you don't know of as cognitive bias (technically, being correct is having a biased distribution, too). Even if you actually make a demonstrable fallacy, there's a way out, via this very argument that the opposite side is just too good at knowing fallacies for their own good (I forgot what that article was called). Dmytry (talk) 11:49, 13 June 2012 (UTC)

Isn't that what they refer to as "undiscriminating skepticism"? Basically, if you are an atheist but not a Yuddite, you are an atheist because you hang out with moderately educated people who are mostly atheists. And similarly, people who are skeptical of cryonics are so because of tribal signalling due to Penn and Teller. Here: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/ Do these guys seriously not see the big beam in their eyes? What is this ( http://lesswrong.com/lw/cfe/i_stand_by_the_sequences/ ) if not tribal signalling? Or even more than that, a list of articles of faith?--Baloney Detection (talk) 20:36, 13 June 2012 (UTC)
Yup. They basically slap the label "improving the art of human rationality" onto "bloviating about the irrationality of those who disagree with things for which we don't even have a well-defined argument". There's nothing about how to actually find the truth. The truth should magically appear if there are no biases. Is this actually a very bad and offensive comment: http://lesswrong.com/lw/ai9/how_do_you_notice_when_youre_rationalizing/5y3w ? What am I to think of getting -5 (-9 +4 or so) on an explanation of rigorous reasoning and the accumulation of reasoning errors in fuzzy reasoning? (That was before I ever flamed their AI bullshit.) Dmytry (talk) 09:50, 14 June 2012 (UTC)

Bob Carroll of Skeptic's Dictionary on LW (extremely briefly)

Here is an e-mail I sent him, and his reply. The hotlinks in the original e-mails are not included in this reposting (except the written-out link to LW). My original e-mail below:

Dear Dr. Carroll

I wonder if you would consider LessWrong ( http://lesswrong.com ) to be a trustworthy, reliable site, if you are familiar with it. It is based on the writings of Eliezer Yudkowsky whom you have a reference to. They write a lot about rationality, the human mind, and cognitive biases. But on the other hand, they also promote cryonics, the singularity and other topics of dubious scientific validity (Yudkowsky claims that parents who don't sign up their children for cryonics are "lousy parents").

Is it a site you would consider linking to, or a site you would dismiss as woo?

Sincerely yours, [Name]

And his reply below here:

woo

thanks.....

Bob Carroll
The Skeptic's Dictionary
Subscribe to our newsletter
Try the SD app for iPhone
Skeptic's Dictionary for Kids (new!)
Unnatural Acts: Critical Thinking, Skepticism, and Science Exposed!

I hope my e-mail was fair-minded enough. I tried to include both positive and negative aspects of LW in it, while still keeping it as brief as possible. --Baloney Detection (talk) 20:19, 10 June 2012 (UTC)

Speaking of which - you could maybe ask Holden Karnofsky if he knew of the Roko thing when he was evaluating SI. As far as charity evaluation is concerned, the Roko thing is an unforgivable sin - apparently you're not supposed to think through the faults in the moral philosophy of the FAI. However you look at it, it's total frigging idiocy. Actually, I doubt he heard of the Roko affair before he wrote the review; if he did, that would have saved him quite a bit of time. Dmytry (talk) 21:47, 10 June 2012 (UTC)
I've never heard of him, but Googling and checking his site, they appear to be in bed with the Yuddites.--Baloney Detection (talk) 21:28, 12 June 2012 (UTC)
Really? He criticized them pretty harshly: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ Dmytry (talk) 21:30, 12 June 2012 (UTC)
I see he did. My mistake, though at his own site he seemed more positive toward them. Perhaps I'll send him an e-mail.--Baloney Detection (talk) 15:10, 17 June 2012 (UTC)