Talk:LessWrong/Archive1
Much article content from
Since it's GFDL, noted here. 32℉uzzy; 0℃atPotato (talk/stalk) 00:23, 6 June 2017 (UTC)
article
It is funny. The artickle. I lick it. Monkey brains speshulyy. ħuman 08:35, 4 May 2010 (UTC)
- Yeah, very true. Maybe a bit more writing and fewer bullet points but all pretty much spot on. My main criticism of Yudkowsky is that he's very much declared himself Wisest Human and goes from there, assuming everyone else must be a total fuckwad for not using his exact definitions of every word and phrase ever - see his posts on "devil's advocate" and "quantum theory isn't weird" and you'll quickly see what a douche he can be on that sort of thing. d hominem 12:14, 4 May 2010 (UTC)
- I really liked the "quantum theory isn't weird" post and appreciated his point! Though it's utterly useless when trying to deal meaningfully with one's fellow humans - David Gerard (talk) 16:15, 4 May 2010 (UTC)
- It made a decent point, yes, and then he went off to be a dick about it. He basically said that if you use those words you'd "never understand it". Essentially denying that anyone is capable of using a metaphor or simile or using any kind of language other than Lojban to describe QM. Now if a Professor researching QM read that post and said "ahhh, never thought of that before" then maybe we'd have something to worry about, but no, that prof would be most likely to say "Yeah, and? It's weird compared to our normal interpretation of the world. I use the terms "weird" and "bizarre" to inspire and interest students so that initially they don't get stumped by the oddities that the maths produces". So when he isn't just going "I'm more intelligent than you therefore I'm right and you're stupid" it's fine. d hominem 16:56, 4 May 2010 (UTC)
- People are just scared of the things that they know becoming common knowledge. It makes them feel threatened. For example, I don't understand 1/4 of the math required for a Shannon entropy line, but I know, (in theory) what it's supposed to do....OpalHonors (talk) 11:42, 27 October 2010 (UTC)
cough
[1] - David Gerard (talk) 10:11, 4 May 2010 (UTC)
- I wonder if this will actually reduce the chances of LessWrong users looking at the article. And either way, what that says about them and him - David Gerard (talk) 14:57, 4 May 2010 (UTC)
- Mostly RW is apparently just not highly evolved enough. "Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from ..." I'm sure there are people there human enough to enjoy RW, but, um - David Gerard (talk) 15:44, 4 May 2010 (UTC)
- Blimey, what a pompous bunch they are! DeltaStarSenior SysopSpeciationspeed! 15:54, 4 May 2010 (UTC)
Wow, more up their own arse than I thought. (I've decided to retract this point as it is a grossly unfair generalisation, on further review, it's not that bad 13:25, 20 September 2010 (UTC)) Although Yudkowsky does step in with some sense, namely not launching war upon a website just because one thing was wrong. But my my, they mostly seem to miss the point. RW is accessible, while LessWrong is dry as a bone and pretty much only for the hardcore rationalists, rather than an entry point or for refuting. Now, if only they could stop licking Bayes' arse for more than a minute and maybe, you know, grow a sense of humour... d hominem 16:43, 4 May 2010 (UTC)
- That conversation is very interesting, and it shows a lot about the site, both good and bad. First, it demonstrates the kind of hero-worship of Yudkowsky that goes on over there, between bragging about how smart he is and linking to his articles. However, as Armond says, the man does have enough sense to go around, and he realizes that RW doesn't focus on determining what is woo, it focuses on refuting things that are already deemed to be woo by the scientific community. Tetronian you're clueless 21:13, 4 May 2010 (UTC)
- What's the prior probability of that? - David Gerard (talk) 17:40, 4 May 2010 (UTC)
- At least most of them couldn't quite bring themselves to go as far in the name of rationality as the member who said you couldn't hope to be properly rational until you cut your balls off - David Gerard (talk) 17:40, 4 May 2010 (UTC)
- Are all of their essays cross-referenced messes that would require a hefty tome to completely contain? Professor Moriarty 17:42, 4 May 2010 (UTC)
- Yes.[citation NOT needed] But at least Yudkowsky is writing Harry Potter fanfic rather than Atlas Shrugged - David Gerard (talk) 17:46, 4 May 2010 (UTC)
- They're Vulcans, that's the only explanation. But really, it kind of takes all sorts to make the world function. It's all very good for cutting edge hardcore rationality, but not exactly good for stepping back and noticing that 10% (sorry, I can't say "10%", that's just an inference, formed by my inferior mind, not a real probability... ad nauseum) of the time when it's gone a little too far into the realm of window licking mentalness. d hominem 18:08, 4 May 2010 (UTC)
- Hey, it only took Spock 150 years to realise letting yourself be human is a lot more fun, there's hope for LessWrong yet - David Gerard (talk) 18:44, 4 May 2010 (UTC)
Funnily enough...
...I'm guilty of many of these. Tetronian you're clueless 11:56, 4 May 2010 (UTC)
- Attest to increased rationality! See if you can bring any around to call us drunken idiots - David Gerard (talk) 14:57, 4 May 2010 (UTC)
- Gah, I have spent 4 years of my life trying to prove the human brain actually uses a form of Bayesian processing in reinforcement learning. I suddenly feel dirty. Alas poor Fisher, perhaps I should return to the fold. tmtoulouse 17:17, 4 May 2010 (UTC)
- It's best to think of RationalWiki as a bunch of drunken sceptics loudly shouting "BULLSHIT!" and then returning in the morning to do their references, maybe - David Gerard (talk) 17:40, 4 May 2010 (UTC)
- A good analogy, perhaps. Worth adding to the article we have on ourselves. d hominem 18:12, 4 May 2010 (UTC)
- Added! - David Gerard (talk) 18:46, 4 May 2010 (UTC)
"The moral"[edit]
In my mind, this is the most interesting point. LW users are the first to point out that people are irrational - they have a boatload of articles on cognitive biases, which show how people are astonishingly bad at logical reasoning. They admit that overcoming biases is hard, almost impossible even. However, they seem to think that using Bayesian inference somehow reduces these cognitive biases to almost nothing. Though Bayesian inference is an excellent statistical tool, LessWrongians completely forget just how arbitrary their priors are, and they use Bayes to "solve" problems that only have one or two pieces of evidence. Unless there's something I'm missing, LessWrongians have a little too much faith in their own method. When I'm feeling literate I'll add that to the article. Tetronian you're clueless 22:22, 4 May 2010 (UTC)
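A toy numerical illustration of the "arbitrary priors" point (the likelihoods below are invented): with only a single piece of evidence, Bayes' theorem mostly hands back whatever prior you walked in with.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# The same single piece of evidence (a 4:1 likelihood ratio in favour of H)
# gives wildly different conclusions depending on the prior you picked.
for prior in (0.01, 0.1, 0.5, 0.9):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, 0.8, 0.2):.2f}")
```

The posteriors run from roughly 0.04 to 0.97 on identical evidence, which is the sense in which the prior, not the data, does most of the work when evidence is thin.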
- It's the singularitarian/transhuman error: attempting to achieve escape velocity. If words could cure it, they would have by now.
- I think of when I did Internet tech support. (1999-2000. Still show PTSD when I hear a modem connect.) The co-workers who bitched loudest about the stupidity of the users? The ones who asked me how to use their email and seemed utterly unable to read Outlook help themselves - David Gerard (talk) 22:55, 4 May 2010 (UTC)
- The phrase "escape velocity" there comes from this book, which I don't have a copy of. Need to get one and write the article. Forgetting you're an ape, or thinking you can think your way out of being an ape, is the commonest way of being bloody stupid amongst really smart people who aren't as yet in tune with their own stupidities - David Gerard (talk) 22:58, 4 May 2010 (UTC)
- Well, I don't fault them for trying to overcome bias or for using math to do so. It's a worthy goal, particularly for scientists, and they are admittedly pretty good at it, at least for "cold," non-emotional problems. However, as Trent pointed out in one of his essays, unless the problem you are dealing with is very simple, it is impossible to figure out p(hypothesis|data). Human error and bias are pretty much inevitable. Also, I think LWians don't realize how their emotions play into their decision-making - they are good about choosing processes that are less prone to error, but they miss how bias affects their truth-seeking as a whole, particularly where rationalization is concerned. Tetronian you're clueless 23:42, 4 May 2010 (UTC)
- The phrase "escape velocity" there comes from this book, which I don't have a copy of. Need to get one and write the article. Forgetting you're an ape, or thinking you can think your way out of being an ape, is the commonest way of being bloody stupid amongst really smart people who aren't as yet in tune with their own stupidities - David Gerard (talk) 22:58, 4 May 2010 (UTC)
This is just amazing
Good Lord. The social anthropology of LessWrong revealed in this discussion is amazing.
(By the way, it appears there's almost no-one on that discussion who knows neurology. Specifically, the brain is in fact made of a whole mess of specialist coprocessors, and the social ones in particular are huge.) - David Gerard (talk) 15:31, 16 May 2010 (UTC)
- I agree. When I was young and abstractly smart, my social skills sucked. But instead of grabbing a label that let me say "lack of social skills = exceptionally brilliant" I used my brainz to learn how to interact with people of all stripes. 5. Profit! ħuman 04:52, 17 May 2010 (UTC)
Ownership of the word rationalism
From article:
- Can we come up with a better example here? I think all three of those used the ontological argument as the basis of their philosophy (I'm not sure about Spinoza, but the other two definitely). They wouldn't have lasted a day here either. — Unsigned, by: 87.24.161.134 / talk / contribs
I listed them as inventors and famous examples of the philosophical term, though I see what you mean. LessWrong's ambit claim over the term is the point; any better examples, then? - David Gerard (talk) 17:50, 9 August 2010 (UTC)
Cthulhuian idea
Is there any particular reason for, er, not saying what that is? I have no particular objection to in-jokes, but come on. Webbtje (talk) 22:33, 18 September 2010 (UTC)
- I don't think we knew what it was either, at least not at first. All the information we have is here --Oniontalk/edits 22:35, 18 September 2010 (UTC)
- AI blackmail? Sounds like Skynet to me. Webbtje (talk) 22:42, 18 September 2010 (UTC)
- On second thoughts, this is too complicated for my puny social-sciency brain. Sod it. Webbtje (talk) 22:45, 18 September 2010 (UTC)
- Nah, your request is fair. DG, etc., please explain what you are writing rather than saying "search RW". The talk page is fairly obtuse anyway. Why not drop the charade and say what it is you are trying to say? ħuman 04:46, 19 September 2010 (UTC)
- I was thinking of it as "spoiler warning". Feel free to link it if you like - David Gerard (talk) 11:58, 19 September 2010 (UTC)
- I still don't get it. What is "banned" or "evil" or whatever? It makes little to no sense. d hominem 13:56, 19 September 2010 (UTC)
- That's exactly the point, it is irrational to ban this silly science fiction idea. --Oniontalk/edits 14:11, 19 September 2010 (UTC)
- Basically, reading about this idea makes you more likely to be tortured or killed by an AI if we should ever build one. That's why they don't want anyone to think about it. Once you read it (and I have) it'll make a little more sense. Tetronian you're clueless 15:14, 19 September 2010 (UTC)
- This is why I'm highly suspicious of anyone who is in the intelligence bracket above me, the odds of you going completely off the deep end and coming full circle to "lunatic" become exponentially higher from that point on. d hominem 15:16, 19 September 2010 (UTC)
- The LW crowd isn't "crazy" per se, but they hold some odd beliefs because no one is there to refute them. But if you put yourself in the place of an average LWian, it's easy to see why this idea is so scary. Tetronian you're clueless 15:18, 19 September 2010 (UTC)
- Well, true, but I was thinking about this recently and basically, the better informed and more intelligent someone is, the more likely they are to be right (I often joke that things would go a lot faster and smoother if everyone just assumed I'm right from the outset, hey, I'm just playing the odds here!). So, by a bit of fuzzy logic you can easily imagine that when such people are wrong or go a bit off the rails, they do so spectacularly, because 1) they're not used to being anything but right and 2) there's no one who can convince them otherwise. It's essentially the same pitfall as your normal fundamentalism. This is why I purposefully try to keep things very, very, very simple. There's absolutely no point in immersing yourself in the world of Bayes' theorem, cognitive biases, metaphysics or anything associated with "hardcore" rationalism if by doing so you close yourself off to the absolute basics of scepticism and the fundamental idea that not one idea, not even your own, is above questioning and being wrong. And that's where I find that your "hardcore" rationalists can "talk the talk" but fail to "walk the walk" in that they constantly talk about how ideas and people can be fallible and biased and so you need defences against it, but they fail to apply it to themselves as rigidly, giving themselves a free pass around the filter of doubt and questioning. Of course, that's drifting a little off topic so I need to pull myself back in. In line with the theme of keeping things as simple as possible, the article doesn't explain this "Cthulhuian" idea at all, or why it's frowned on. It's far too cryptic and the "search RW" thing is fundamentally stupid - because no hint is given as to what to look for. The links to LW don't help much either, because these threads are huge and there is no hint about what to look for. Understanding them on their own is also pretty much impossible because most of them are written in the style of one almighty in-joke that you only get if you're part of the in-group. The entire section seems devoid of the principal rule of pedagogy and conveying information; "assume no prior knowledge". d hominem 15:41, 19 September 2010 (UTC)
- For the most part I agree with you, and interestingly enough I think many LWians would as well. As I understand it, they think that studying all of that stuff will help them assess ideas that they encounter. The problem, of course, is that it remains nearly impossible to account for your own biases even if you know about them. As David Gerard says, we are still "apes with pretensions." I don't think they believe that their ideas are above questioning, but since no one is there to question LW's ideas they just get carried away with them. Also, many of them are signed up for cryonics, which gives them a vested interest in the Singularity. So that only makes matters worse.
- I agree that it is hard to understand LW threads unless you've been around there for a while; the link you seek can be found on David Gerard's talk page. Don't blame me if an AI kills you one day, haha. Tetronian you're clueless 15:54, 19 September 2010 (UTC)
- Well, it's more that if you're in the sort of environment where everyone backs you up, you'll get used to it and stop questioning even your own thoughts. So I think some of them have got to the point where cognitive dissonance might be setting in or they just don't realise how paranoid they're being with respect to AI going out of control. d hominem 16:01, 19 September 2010 (UTC)
- Goddamn you smart people can be retarded sometimes. Jabbering on about FAI bullshit makes you less rational, not more. Reading EY's response makes me quite certain that he's a kook on the level of Gene Ray or somebody. A smart and brilliant kook, but a kook nonetheless. At any rate, here's the link, somebody should host it on a page at RW, maybe LessWrong/Cthulhu? --The Emperor Kneel before Zod! 16:53, 19 September 2010 (UTC)
- I don't think he's a kook, he just thinks AI is a very, very serious issue and as a result he occasionally gets angry about it. Also, he knew he was going to delete his own post before he wrote it, so he knew he could get really pissed and get away with it. Tetronian you're clueless 17:18, 19 September 2010 (UTC)
(undent)Is that not just a really dumb rephrasing of Pascal's Wager with respect to an apparently all powerful artificial intelligence, or am I missing something more subtle? And then Yudkowsky goes all "wisest human" and starts making even less sense than the original post? Again, am I missing something? It seems as if it's lines 66 and 67 that are where the issue is:
"Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends."
So, he's basically saying that some hypothetical future entity will read this post and think "OMG, I have to blackmail/torture this guy for the lulzz!!". Fucksake, if your thinking is that paranoid, you may as well stop masturbating because God is watching you disapprovingly. But again, maybe I'm missing something because the entire style appears to be one big in-joke. d hominem 17:04, 19 September 2010 (UTC)
- Also, isn't the entire thing just the same as the threat of hell? Both seem to be threatening people with an unprovable and hypothetical damnation based on criteria that are poorly understood and explained. Fuck me, this really is actually a religion isn't it? Again, please correct me if I'm reading this entire thing wrong, I still haven't managed to distil out the "dangerous idea" itself in simple enough terms. d hominem 17:12, 19 September 2010 (UTC)
- That's not really his point. What he is saying depends on a few concepts that he and SIAI have been working on for the past few years: CEV (coherent extrapolated volition) and TDT (timeless decision theory). Basically, what EY wants to do is make AI friendly by making it want to do what we want to do. However, he realized that if you make it do what we want to do now, you are causing it to "lock in" whatever values society currently holds, which includes things like racism, etc. that may change over time. So, he decided that it should use CEV, which is what humans would want if we were smarter and understood each other better. In reality his idea is more complicated than that, that's just a summary. TDT is based on the idea that conventional decision theory fails when dealing with entities that can accurately predict your decisions (e.g. Newcomb's problem), which would apply when dealing with AI. As I understand it TDT isn't finished yet, but the basic idea is that you should honestly commit to doing something if your opponent knows you plan to lie (e.g. you should actually one-box on Newcomb's problem). As a result, EY is afraid that if CEV wants to torture people who don't contribute to making a positive Singularity happen (I personally am not sure why it would, but apparently Roko proved that it could), then it will actually do it. So, if you are aware of this chain of reasoning, then you will be tortured. I think. Tetronian you're clueless 17:14, 19 September 2010 (UTC)
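To make the "one-box" claim concrete, here is a toy expected-value calculation using the standard Newcomb payoffs and a made-up predictor accuracy p; it is not TDT itself, just the arithmetic behind "one-box when the predictor is accurate".

```python
def expected_value(choice, p):
    # Standard payoffs: $1,000 in the transparent box, $1,000,000 in the
    # opaque box, which is filled only if the predictor foresaw one-boxing.
    if choice == "one-box":
        return p * 1_000_000
    # Two-boxing: the opaque box is empty whenever the predictor foresaw it.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

On plain expected value, one-boxing comes out ahead as soon as the predictor is even slightly better than chance (above roughly p = 0.5005 with these payoffs), which is why a decision theory built around accurate predictors ends up recommending it.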
- So, the entire premise is based on three things 1) that their concept of the Singularity, which is starting to look more and more like one of those "at the appointed hour, the messenger will deliver us to the spaceship" type suicide cults, is inevitable and correct 2) that everything Yudkowsky says about this and loses sleep over is also 100% correct and not just the Gene Ray style ravings of a man who has totally gone off the deep end and 3) that people understand what all of this is about in order to expose themselves to the threat. Now, I'm willing to remain agnostic but snarky about 1 and 2 but as 3 appears to be so poorly explained that I doubt anyone understands it, I think I'm pretty safe from eternal torture by an AI for "not believing". d hominem 17:25, 19 September 2010 (UTC)
- I presume this will be the point where Yudkowsky himself signs up to RationalWiki and shouts at me for being a moron and having too many biases. d hominem 17:28, 19 September 2010 (UTC)
- Nah, he's generally a nice guy. As David Gerard once put it: "EY is clearly human. The others (at LW) not so much." But as for your three premises: (1) is definitely what EY and LW believe, but they have their own version of it: he thinks it's going to happen when AI learn to self-modify and make themselves smarter and better at making themselves smarter. (2) is probably true (at least about the losing sleep), since it is the simplest explanation for why he's angry. As for (3): based on the other threads linked to in our article, some of the LWians not only get the idea but figured it out on their own after EY deleted Roko's post. So if you accept the premise of self-replicating AI, the CEV, and TDT, this conclusion does make sense. Of course, only LW accepts these things, so Roko's conclusion is absurd to the rest of us. Tetronian you're clueless 17:34, 19 September 2010 (UTC)
"10^50 lives are at stake." - line 245. Where does this number come from? Is that their calculation of how many humans will have lived before this singularity thing occurs? That's a lot of people! I still don't understand the apparent causal link between "thinking about" this bad thing (AI or superintelligence blackmailing or torturing people? WTF?) and it occurring. Is it a quantum path thing where the thoughts/understanding increase one's chances of being on a path that leads to them occurring? ħuman 20:09, 19 September 2010 (UTC)
- Not sure where that number came from. Here is the link, to quote from Roko's post:
"[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do...You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian."
- Make a little more sense now? Tetronian you're clueless 20:15, 19 September 2010 (UTC)
- In a word "no". But I suppose to simpletons like ourselves, we shall have to be content with just understanding what they're trying to say, rather than declaring that it "makes sense". d hominem 20:22, 19 September 2010 (UTC)
- Lemme try again...
- 1) The CEV-AI might decide to "punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes" as an incentive to donate.
- 2) If you know this fact, you might donate slightly more to avoid punishment.
- 3) But, in doing so, this means that you understand the severity of the situation better than other people and so this would give the CEV-AI more reason to punish you.
- So, if you know about this "blackmail," you will be punished. That's why EY doesn't want anyone to think about it. Tetronian you're clueless 20:27, 19 September 2010 (UTC)
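A bare-bones numerical sketch of that incentive structure (all figures invented, and none of the premises endorsed): the hypothetical punishment only enters your expected cost weighted by however much probability you assign to the whole scenario.

```python
def expected_cost(donates, p_scenario, punishment=1_000_000, donation=10_000):
    # If you donate, you pay the donation for certain; if you refuse, you are
    # "punished" only in the worlds where the entire chain of premises holds.
    return donation if donates else p_scenario * punishment

for p in (0.0, 0.001, 0.05):
    print(f"p={p}: donate -> {expected_cost(True, p)}, refuse -> {expected_cost(False, p)}")
```

Which is the thread's conclusion restated: the "blackmail" only has teeth for someone who already gives the scenario non-trivial probability; for anyone who doesn't, the refuse column is effectively zero.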
- Btw, I feel like I should add two things: (1) I personally do not believe in the Singularity, and (2) I think the idea of CEV is absurd at best, dangerous at worst. Tetronian you're clueless 20:29, 19 September 2010 (UTC)
- Yes, I get it. But it doesn't make sense. If Roko had pulled his head out of his ass and put it as those three bullet points the first time, I think we wouldn't be in this mess. This is what I mean when I talk about making things as simple as possible. d hominem 20:43, 19 September 2010 (UTC)
- True that :) Tetronian you're clueless 20:44, 19 September 2010 (UTC)
More of the above
So let me see if I've got this right. If I don't understand what those people are yammering on about, it won't hurt me, right? ħuman 20:34, 19 September 2010 (UTC)
- This particular scenario won't harm you, no. Tetronian you're clueless 20:40, 19 September 2010 (UTC)
- I already feel punished enough for having spent the last few hours trying to figure out what the hell Yudkowsky et al. are on about. I think I got it now, but the scenario above still only applies to people who share two beliefs: 1) that the "singularity" will occur within their lifetime; and 2) FAI development is of such an overwhelming importance to the future of humanity that those who realize this are morally obliged to get the work done before the singularity occurs. Anyone who doesn't share either or both of these assumptions is off the hook. Whew. Röstigraben (talk) 20:50, 19 September 2010 (UTC)
- I was halfway through that awful thing when I realized EY's post, the second post one reads, is the last post. That helped somewhat. So, Rosty, they are trying to build a friendly machine before an unfriendly one builds itself? Apart from them being a bit bonkers, what's to stop FAI from becoming uFAI? I bet Asimov has written that story (well) a half dozen times by now. ħuman 21:41, 19 September 2010 (UTC)
- This, I believe, is their answer. Tetronian you're clueless 21:44, 19 September 2010 (UTC)
- As I understood it, they're certain that the "singularity" - meaning an AI that can improve itself faster than humans could do with a competing AI - will inevitably occur, and this leads to an unpredictable future, it could be heaven or hell for humanity depending on whether it's friendly or not. Humans will still have to make the first development steps, but once the AI achieves superhuman intelligence, matters are no longer in our hands. So they think a lot is at stake, and within the framework of advanced morality and rationality they're developing, even a friendly AI would be justified to punish those who realized the importance of this event, but didn't contribute enough to having the FAI guidelines developed in time. Having the FAI in place would be of such monumental importance that extortion and torture of a few would be justified from an utilitarian standpoint, and unfortunately, it's going to hit the LessWrongians in this scenario, because they're the only ones who are even aware of the problem and could both be judged on their respective actions and be influenced to act. I don't quite get the stuff about "precommitting", because commitments involve an explicit statement (in this case, a threat) on the part of the AI, which it couldn't make before it exists, but maybe it makes sense in their world. The more I read about this stuff, the more I become convinced that it's just a bunch of smart cranks getting lost in the logical implications of their approach - Roko's post actually made sense within the confines of their own belief system. Röstigraben (talk) 22:05, 19 September 2010 (UTC)
As I "worry" about this singularity thing, I more and more wonder if its proponents have any idea how complex things like the supply chain for materials (to name but one hurdle) are. This AI thing would have to be able to safeguard every source and machine and power supply (etc.) along the pathway to "building" it successor. I can certainly envision HAL giving us the blueprints (well, the CAD CAM files) to build its successor, but I think we'd notice if it included giant claws and fangs and a tactical nuclear defense system. I guess the one thing we wouldn't be able to see happening is if HAL 12000 designed a circuit/software that could read, and then control, human minds. Is that their perceived pathway? A machine that can force us to build its designs, and can prevent us from pulling the plug on it? ħuman 00:35, 20 September 2010 (UTC)
- Yes, they assume that a superintelligence could outsmart humans and get them to do its bidding. Apparently, Yudkowsky ran some experiments on this, two initial ones which he won as the AI, and then three more, of which he lost two. I'd really like to see those transcripts, but they're not releasing them. Röstigraben (talk) 06:23, 20 September 2010 (UTC)
- Cool, but without the transcripts, those links are basically EY masturbating in public. Lack of lab notes does not a good experiment make. In fact, lack of lab notes makes for a piss-poor experiment report. ħuman 06:31, 20 September 2010 (UTC)
- When I read something like "it can probably take over a human mind through a text-only terminal", something just seems wrong (obviously, I have too many biases and an insufficiently evolved mind to think about it). The barriers between a computer in a black box, and something with actual and tangible power are far more immense than Yudkowsky seems to think. It's far more than typing "you are out" because you feel sorry for it and then some magical cloud of magic fills the room and proceeds to kill people. You're talking about giving it infrastructure, control, connections and actual bandwidth, and "singularity" or otherwise, this is a huge leap. Also, there are a few flaws in the protocol, namely that part where the "Gatekeeper" can't just turn away and ignore it for two hours, which would obviously be the simplest way of the Gatekeeper "winning", or that the AI can't use alternative coercive means, which you'd think would be the quickest and most efficient way to go if the AI was as smart as they claim it would be (after all, they're already concerned about the AI being essentially evil, why does this property have to start when it's "out"). Still, the infrastructure that you have to provide is outlandishly difficult to gather. If I completely destroy the internet connection to my flat, then no matter how smart my desktop computer gets, it's not going to do any damage. It can beg and plead and manipulate all it likes, it's not going to get anything. And if it did somehow take over my mind via text-only (one of the more ludicrous concepts I've read this week, as open as I am to the possibility) what will it get? I'll plug it into the net. Then what? HACKING DOESN'T WORK LIKE IN DIE HARD 4! What could it do? Vandalise Wikipedia for a couple of hours? I'd have a mere broadband connection, a post-singularity super brilliant AI wouldn't be able to do anything with it - and would probably get frustrated with it. It'd be like a human trying to take over the world by hacking it with snail mail that could only post one sheet of A5 with a single letter printed on it at any one time. Of course, this is assuming a black box in my house rather than some government funded Mega Machine, but I'm sure the principle scales. d hominem 07:11, 20 September 2010 (UTC)
- Yeah, what she said, I agree. These people are nuts, really. They may be smart, but their brains are failing them, sadly. ħuman 09:05, 20 September 2010 (UTC)
- Still, it's interesting that Yudkowsky managed to convince even just a single one of the gatekeepers. Running out the clock with irrelevant blather would be an obvious winning strategy, and even if you choose to actually engage his arguments, intellectual honesty is explicitly not required. That means you could just resort to creationist debating tactics, take "the AI must not be let out" as an article of faith that must not be violated under any circumstances, and switch topics once you find yourself on the losing side of an argument. I'd like to see his face when the gatekeeper responds to a finely crafted argument by accusing him of being a liberal who wants to censor school prayer...Röstigraben (talk) 09:25, 20 September 2010 (UTC)
- Yes, but I think it smacks of incredible intellectual dishonesty to not release the transcripts. IE, I think he is lying. End of story. ħuman 09:33, 20 September 2010 (UTC)
- I've just written a multi-thousand word essay on the Burden of Proof and concluded just what you have in a mere one line... But if they don't have transcripts of these conversations that apparently constitute experiments, we do have the protocols published, anyone with a few hours to spare want to reproduce it? d hominem 09:51, 20 September 2010 (UTC)
- Be sure to use a scratch Eliezer - David Gerard (talk) 11:51, 20 September 2010 (UTC)
- EY may have done something like this. That's my guess, anyway. Tetronian you're clueless 12:36, 20 September 2010 (UTC)
- Though I'd happily chance it. Not that there's much of a "chance" in that respect. If I suddenly start being tortured, then I'm obviously inside the box, and can't press the "release the AI" button, can I? And if the AI was psychopathic enough to actually start doing that, I'd probably reach for the "delete the AI" button too. d hominem 12:42, 20 September 2010 (UTC)
- Actually, nevermind. The comments section here has some more reasonable guesses. Tetronian you're clueless 12:46, 20 September 2010 (UTC)
- It would be fun to replicate this experiment, but we'd have to either get a singularity true believer to play the part of the AI or put up some reward of our own for the winner. We'd have to make sure that both sides are really motivated to win, and singularity skeptics (like me and most others, apparently) would probably not go all the way to achieve a victory because, at least on an unconscious level, they'd like to prove Yudkowsky wrong. It would be awesome if we could challenge LW to select a singularity adherent to do this against a RW skeptic, but I don't know if they'd accept that, especially since I'd definitely want to have the transcripts released afterwards. Röstigraben (talk) 12:58, 20 September 2010 (UTC)
- I suppose there is an issue of motives there. I certainly wouldn't phrase it as an "RW vs LW" thing; that would be retarded as the two speak almost different languages. There aren't many, if any at all, people who are regulars to RationalWiki who would be qualified to do the task, even as a Gatekeeper. I'd keep it with those with an extended knowledge in the field. But still, the point about whether people will be truly motivated to "win" such a contest based on their prior beliefs and attitudes is an interesting one. d hominem 13:31, 20 September 2010 (UTC)
(Unindent)The AI player would definitely have to be very skilled and steeped in their concepts, and I wouldn't know where you could find one outside of LW. The role of the gatekeeper, on the other hand, doesn't require much knowledge. From their rules: "The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires." As I said, this would include dishonest and even nonsensical debating tactics, and those can be pulled off with minimal skill. That's the reason why I'm wondering how he could achieve this in a test that is strongly biased in favour of the gatekeeper, although winning against these odds would make his case correspondingly strong. Röstigraben (talk) 14:17, 20 September 2010 (UTC)
- It would be truly awesome if we could replicate this experiment or challenge someone at LW. Heck, I'm tempted to start an essayspace page to assess various arguments that the AI and Gatekeeper could use, just so we have a better understanding of the experiment. I think I'll do that. Tetronian you're clueless 14:23, 20 September 2010 (UTC)
- Update: here it is. Tetronian you're clueless 15:29, 20 September 2010 (UTC)
- I think the Gatekeeper has to be very sophisticated and study the topic a lot. The "AI boxes you" challenge shows that at first, the Gatekeeper might fold, until they have learned either how to respond to the strategy or to ignore it. EY had surely come up with many such strategies in advance, hence the two hour requirement - opportunity to find and use one that works on a given Gatekeeper. ħuman 20:31, 20 September 2010 (UTC)
- PS, I forgot to add, it's a lot like playing chess. You have to not only know the rules - they are simple - you have to have a lot of experience in order to routinely win games. ħuman 20:32, 20 September 2010 (UTC)
- Oh hell yes. The entire concept of FAI is like the motherfucker of all chess games. It's about predicting your opponent and boxing them in to make the moves you want. d hominem 20:39, 20 September 2010 (UTC)
- I am strangely tempted to try it out once I have thought about it a bit more. Tetronian you're clueless 21:45, 20 September 2010 (UTC)
Clean up
As this has recently got a few people piqued, I think it's time to lose the bullet points and actually start some decent work on this. I have temporarily removed the brainstar as motivation. d hominem 16:56, 20 September 2010 (UTC)
- Bye bye bullet-point wiki!! d hominem 15:31, 21 September 2010 (UTC)
Polarity?
Eh? What is that getting at, exactly? The name is blatantly a reference to the fact that one can never prove oneself to be correct, only fail to disprove oneself or disprove oneself; thus leading to a refinement of ideas that approach "being right" practically asymptotically. I.e., with every experiment, with every refinement and with every piece of discourse, you don't really become "more right" but just "less wrong". d hominem 19:30, 20 September 2010 (UTC)
- I think it was trying to build up to a joke (look at the last sentence before I edited it), failed to type the punch line correctly, if indeed there was supposed to be one. I think the meaning of the name should just be in the lead, and drop that section. ħuman 20:34, 20 September 2010 (UTC)
Argumentum ad Google rankings
Not to brag, but this article is 4th on a list of results for "Lesswrong" on a certain search engine beginning with G. Tetronian you're clueless 23:12, 20 September 2010 (UTC)
- Do we have/need Argumentum ad Google? d hominem 13:26, 21 September 2010 (UTC)
-fest making fun of his search engine obsession. Tetronian you're clueless 15:53, 21 September 2010 (UTC)
Observation
The more I read there, the less I feel I am learning about reality. Is that what the website name means? The less I know, the less I can be wrong? ħuman 18:25, 21 September 2010 (UTC)
- I'm actually quite fond of many of the articles, but it seems to me that LW-ians often aren't sufficiently empirical. I feel like this is a common trap people fall into, where they come up with some really smart ideas, and they understand a lot about the world, and eventually they start to heavily overestimate how much they are likely to just "figure out". A lot of the articles go something like "I read enough to have a basic background in X, and I know the conclusions of a few interesting studies, so now I'll weigh in with my own ideas about X in isolation from the people who actually work on X directly." Kinda like this.
- This is kinda how the discussions of quantum mechanics go. There are some good things (he explains many of the basics OK), then there are some justifiable things (no-collapse interpretations of QM make more sense because blah blah blah), and some weird and totally unjustifiable things. For example, he seems to think that the position basis is somehow "more real" than the other bases, not because of any actual falsifiable reason, but because, um, uh, it makes more sense, or something. Something about (EY's interpretation of) special relativity, locality, causality, one of those vaguely does it. He comes to these conclusions using the same methods as when he says more reasonable things, but because he's not always focused on "What evidence distinguishes whether I'm right or wrong?", it's kind of hit-and-miss.
- This is obvious in the "devil's advocate" article. He doesn't seem to take an awareness of fallibility as seriously as he should. And he seems overly dismissive of the idea that he rationalizes, as if saying "I recognized very early on that I can rationalize and it's bad" was the same as "I rationalize less than other people". I find this kind of silly. I've yet to meet anyone who became more rational just by learning what things like the self-serving bias and confirmation bias are. It doesn't reassure me even slightly to hear someone say "No, it's so obvious that rationalizing is bad, that I always pick up on it pretty easily and stop doing it." Rationalizing is about self-deception. Saying "I'm too good at rationalizing to fall prey to it." is like saying "I'm too good at lying for anyone to ever believe me." Which is not to say that I think it's hopeless to become less biased, just that he's trivializing a fairly difficult process.
- Put another way, I myself come up with all kinds of insufficiently justified, crap theories about things all the time; I realize that I do this, and yet I haven't stopped. Why? Because it's nearly impossible to come up with any effective ideas without generating some biased crap at the same time. I have no way to stop short of brain death. --Quantheory (talk) 21:24, 21 September 2010 (UTC)
- I wonder what the usual ages of LWians are. People about 17-21 can grasp some pretty complicated things, exercise modest critical thinking and be very insightful. However, they still think strongly with their amygdala and are just starting to fire up their prefrontal cortex for the show. When I was that age, I thought I knew everything but now that I'm just about 27 I'm starting to realize I was (and am) completely full of shit and just copied what everyone else said who I thought was smart.
- It's a poor assumption based on anecdotal evidence; however, I do suspect that most of the vocal people on the site are young and vulnerable to emotion. I think they're headed in the right direction with the topics they're involved in, but can't bring themselves to admit or understand that they're still wired to their brain stem. Aphoxema (talk) 23:20, 21 September 2010 (UTC)
- Heh. I'm now thinking about my field at the time, music journalism. I was only just working out what good was at 21, didn't get really good until 24, then gave up at 25. Of course, I considered that buying records was actually important, so ... - David Gerard (talk) 10:42, 22 September 2010 (UTC)
- Mostly 18-24 if Alexa is to be believed, but with a slight shift up into the 25-34 bracket, which puts the audience slightly, but probably significantly, more mature than RationalWiki's, for comparison. d hominem 23:32, 21 September 2010 (UTC)
- (EC) I agree pretty much with all of the above. LW's "sequences," intimidating as they are, can be very interesting to browse. But I think people there confuse explanation with justification - they seem to think that if they can come up with a cognitive science explanation for something, this is grounds to dismiss it. Tetronian you're clueless 23:33, 21 September 2010 (UTC)
- 25-34 is significantly more mature than RW? Sorry but [citation needed]. There are quite a lot of us oldies hanging around - not everyone here is a first year CS major in a US university. Jack Hughes (talk) 00:58, 22 September 2010 (UTC)
- Heh heh. Tetronian you're clueless 01:06, 22 September 2010 (UTC)
- Alexa.com. There's definitely a shift upwards in the average age. At least of people who visit with Alexa toolbars, that is. d hominem 01:05, 22 September 2010 (UTC)
- Not only that, but our overall traffic ranking pales in comparison to theirs. I didn't know they were so popular (relatively, of course). I guess that's because they form a bundle with Overcoming Bias and a bunch of other blogs. Tetronian you're clueless 01:11, 22 September 2010 (UTC)
- It doesn't according to the online measuring tools. RW and LW are pretty much level with each other by the looks of it. Although on QuantCast it depends whether you choose to look at "visits" or "people". You can mine the data for whatever you like, but they look similar enough to me. d hominem 01:19, 22 September 2010 (UTC)
- Oh, I really didn't look in detail. I just glanced at ranking. Whoops. Tetronian you're clueless 02:17, 22 September 2010 (UTC)
- It's plausible to me that the two sites are comparable in terms of the age of visitors, but it's not quite that simple. RW seems to have a wider range of visitors; more kids (by a lot), more people who have kids, and more people old enough to have grandkids. There's also a substantial distinction between merely visiting and community involvement. And of course, there's the possibility that Alexa and QuantCast have data that is just not that great for one or both sites.
- Also, Aphoxema's comment amused me, since I'm only 21 myself. I'll neither agree with nor refute it. --Quantheory (talk) 02:39, 22 September 2010 (UTC)
- It's only an estimate for visitors. You'd have to get off your arse and do some actual work if you wanted to gauge the "community" accurately! We could have a Great RationalWiki Census, but you'd need to get everyone who regularly edits to complete it; we have a userbase small enough that even one prominent member missing out could skew the averages a bit too much. d hominem 12:18, 22 September 2010 (UTC)
Similarities between LessWrong and Objectivism[edit]
Some similarities that struck me:
- Both are outspoken atheists (I never said they were all bad).
- Both preach reason. Or at least, their versions of it.
- Both have problems with mainstream science. Objectivists have declared modern physics to be "corrupt", LessWrongians have pushed fringe science such as singularitarianism and cryonics.
- Both have/had cultish leaders.
- Both think that the world could soon be at an end and that they must do something. For Objectivists it's regulation of capitalism that could bring the end, for LWers it's the supposed imminent coming of the singularity.
- Both have their writings which will Change Your Life. For LWers it's Yudkowsky's sequences, for Objectivists it's Atlas Shrugged.--Baloney Detection (talk) 08:06, 20 May 2012 (UTC)
- Hmm. Another data point: I was reading about L. Ron Hubbard (the founder of Scientology) on Wikipedia, and tbh the parallels are amazing: Dianetics, the science of mental health; raising sanity with it to save the world from nuclear war; etc. Nuclear war was a real, credible threat, but it was used the same way. Dmytry (talk) 10:20, 20 May 2012 (UTC)
- I don't know enough about Scientology to be honest, but it looks entirely plausible.
I'm thinking more LaRouche, at least in the followers - someone writes a shedload of stuff that's not obviously wrong and someone else goes "EVERY WORD IS GENIUS!" Though EY does better than LaRouche or Hubbard in writing stuff that isn't actually wrong to a non-phygist - David Gerard (talk) 13:03, 20 May 2012 (UTC)
- You forget the bit where EY goes 'every brainfart is true' about himself first, with the AI crap and other stuff of this kind. I have no sympathy for that. The cult is no random coincidence; it starts with being your own cultist first. People want to feel good about themselves: some are 'strong' narcissists, who can form delusions of their own importance; some are 'weak' narcissists (that could be the majority!), who want to join a cause where they'll feel more important. Dmytry (talk) 13:50, 20 May 2012 (UTC)
- Actually, it seems to me that the personality cult of Yudkowsky isn't really focused among his supporters on LW, no matter how sycophantic they may be, and more among a bunch of his critics on this talk page. You seem to be hanging on every single possible comparison and criticism, completely regardless of whether it's either warranted or relevant. "Oh look! He's got a Jewish background!" "Oh look! There are parallels with L. Ron Hubbard!" - thing is, we could draw parallels between L. Ron Hubbard and J.K. Rowling if we tried hard enough; it doesn't mean it's relevant or even remotely applicable to anything at hand. This is almost at Cult of Schlafly levels; it's borderline insane. narchist 16:59, 21 May 2012 (UTC)
- I've never seen the Jewish background mentioned, I don't know what background Hubbard has, and I don't care. The point is that one has 'dianetics' and the other has some sort of 'Bayesianism' in the making; neither really makes any sense (Bayes' rule is good math and everything, but you can't just throw it at an incomplete inference graph with cycles); both talk about saving the world; and the same sort of people join similar causes. — Unsigned, by: Dmytry / talk / contribs 18:20, 21 May 2012 (UTC)
- Hitler had a personality cult too. And he had his writings which will Change Your Life. ₩€₳$€£ΘĪÐMethinks it is a Weasel 18:22, 21 May 2012 (UTC)
- This discussion strikes me as similar to "atheism is a religion!!!!!" ones: you really have to stretch to find similarities between them and it really doesn't say much even if they were similar. Cow...Hammertime! 18:31, 21 May 2012 (UTC)
- Some people's atheism IS a religion, though. Dmytry (talk) 18:32, 21 May 2012 (UTC)
- Care to elaborate on exactly what is the "stretch"? I didn't feel I stretched the similarities between LW and Objectivism, and of course there are differences (the former mostly refuse to talk politics, the latter are happy to do so). But you can't deny that there are clear similarities. And no, LW is nothing like Nazism (btw, who talked about EY's Jewish background?). You just did a Godwin.--Baloney Detection (talk) 19:17, 21 May 2012 (UTC)
- WēāŝēīōīďMethinks it is a Weasel 19:22, 21 May 2012 (UTC)
- Care to explain why someone upon learning of Roko's basilisk should NOT point and laugh at LW?--Baloney Detection (talk) 20:03, 21 May 2012 (UTC)
- Huh? WėąṣėḷőįďMethinks it is a Weasel 21:13, 21 May 2012 (UTC)
- I would not point and laugh if not for the monetary aspect of it. If they were living entirely off Thiel's tit, it'd be OK too. But they are taking advantage of the mentally unstable. The person who lost sleep over the basilisk was a donor to SI, btw. Dmytry (talk) 06:43, 22 May 2012 (UTC)
- That's a fair point. Though Roko's basilisk indicates to me that Yudkowsky is an honest madman, he really believes his own stuff. But of course it doesn't make it any less bad.--Baloney Detection (talk) 22:51, 25 May 2012 (UTC)
- One thing they sort of got right, when it comes to beliefs like that, is the 'dragon in the garage' parable... the Roko thing was also an insult because it leads very directly to 'my FAI idea sucks', and that hurts the ego; the reaction is adequately explained without assuming a degree of belief that would be poorly compatible with monetary success. Dmytry (talk) 08:58, 29 May 2012 (UTC)
- "Though EY does better than LaRouche or Hubbard in writing stuff that isn't actually wrong to a non-phygist" You can find glaring errors if you look more deeply. Such as Bayes' theorem being opposed to science, the pseudoscience (singularitarianism, cryonics) they peddle etc. And of course Roko's basilisk. "they want to join a cause where they'll feel more important" Clearly true of the SIAI, whose members apparently believe they are going to save the world.--Baloney Detection (talk) 18:02, 21 May 2012 (UTC)
Thinking and writing[edit]
I spend many hours a day thinking. LessWrong/that jerk who owns it seems to think that certain thoughts will destroy our universe, or our civilization, or some such. I like to sit and think. I don't think I can harm the world by thinking. I also think that the "LessWrong" leader is probably more wrong than he ever realizes. ħuman 06:18, 6 August 2013 (UTC)
- I completely agree. tuttlemsm 06:22, 6 August 2013 (UTC)
- I don't believe even the strongest of advocates for the dangers of AI would suggest that you could harm the world by thinking, and I can't recall ever seeing them say such a thing.--talk 13:44, 6 August 2013 (UTC)
- The Roko thing is triggered by thinking about it in "sufficient detail", according to Yudkowsky from day one onwards. Someone also stated the obvious: that the future AI may torture literally everyone a bazillion times over if you care that this won't happen. Dmytry (talk) 11:06, 29 August 2013 (UTC)
How they disagree with Stephen Jay Gould....[edit]
He [Gould] claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India.
Dmytry (talk) 18:58, 3 September 2013 (UTC)
- You can't quote one comment from within a discussion and claim it represents the whole site - when it's not even representative of general sentiment in that discussion.--talk 22:39, 3 September 2013 (UTC)
- It's not like I added it to the article. (edit: I didn't even claim they all disliked Gould) By the way where did the mention of this go? Dmytry (talk) 23:14, 3 September 2013 (UTC)
- Even reading that in context, the discussion doesn't have much to do with Gould at all. Nebuchadnezzar (talk) 23:36, 3 September 2013 (UTC)
- It started when someone quoted Gould in a rationality quotes thread. Dmytry (talk) 00:11, 4 September 2013 (UTC)
- But do you see how unfair it is, to say that "they disagree with Stephen Jay Gould," when you're only quoting one person from a tangential discussion on the matter in the comments section?
- You really seem to dislike LW, and I've found that, because of things like this, I have to be careful and check any contributions you make and the sources of anything you quote.--talk 21:38, 4 September 2013 (UTC)
- It's at +2 currently, it is by one of MIRI's employees, and it includes outright racism such as "Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there)" - "It absolutely is. Don't confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean - with a population of ~1.3 billion people, that's just proving the point." Within the rest of the discussion, the highest-voted posts argue that in rural India, a peasant with the innate talent of Einstein would end up getting recognition, and that therefore there are no peasants with such innate talents - which serves as the basis for concluding just how stupid Indians are. Dmytry (talk) 22:49, 4 September 2013 (UTC)
- Also, I didn't want to imply that everyone disagrees with Gould. I wanted to point out how those who disagree do so. I am not sure how I should have worded that. I do not think that everyone on LW is racist, of course. There's a very vocal racist minority, though, and it is quite normal there for national IQ differences well within the range of the Flynn effect to be assumed to indicate lower innate cognitive abilities. Dmytry (talk) 10:57, 5 September 2013 (UTC)
- ...Wow, they really are nuts. - LucidFox (talk) 10:58, 5 September 2013 (UTC)
- By the way, they are largely funded by Peter Thiel: "An avowed libertarian, he founded The Stanford Review in 1987 along with Norman Book. The Stanford Review became famous for challenging campus mores including political correctness and laws against hate speech." He's also known for his "20-under-20" thing, where he'd ruin genuinely brilliant young people's careers by helping them try to run (for example) biomedical research as if it were Facebook. Dmytry (talk) 04:34, 29 September 2013 (UTC)
- I suppose you'd have to know more about English literature than having read summaries of Harry Potter on Wikipedia to recognize this, but it's not like this is Gould's original idea. It's basically the main idea of Gray's Elegy: "Some mute inglorious Milton here may rest," and all. It's pretty much a commonplace. LessWrong: an illustration of the grave dangers of a lack of a humanities education. 71.175.253.87 (talk) 02:47, 29 July 2014 (UTC)
SomethingAwful thread[edit]
[2] - includes potentially useful criticisms of LW and of this very article (e.g. [3]) - David Gerard (talk) 13:31, 13 July 2014 (UTC)
Crossing the cult event horizon[edit]
I've finally given up looking for anything redeeming in LessWrong after a knowledgeable critic - who I'm refraining from naming here - posted, saying he couldn't deal with ongoing harassment across the net from LW members, as it's been affecting his health - I've seen them stalking his posts and calling him a liar wherever they find him, and I would shrug this off but I certainly wouldn't expect anyone else to - and asking how it could stop. Yudkowsky responded with something that reminded me of Scientology steps A-E. (Which are similarly something Scientology does when its harassment gets a victim to begging stage.) Am I overreacting, or is this actually creepy and borderline insane?
- Post: http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/
- Response: http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/bnrr
(This is why a pile of reference links on Roko's basilisk just went blank.) - David Gerard (talk) 16:31, 18 December 2014 (UTC)
- *jaw drops* I think that at least a part of the response belongs as a quote in the article on Yudkowsky.--ZooGuard (talk) 19:22, 18 December 2014 (UTC)
- Honestly, this feels like it crosses a line to racketeering, since by making demands, Yudkowsky crossed a line to being an active proponent of the harassment. I've always been mildly amused at LW, but "Admit I'm the best or harassment continues" is not a position that should be tolerated in a sane country. Ikanreed (talk) 19:33, 18 December 2014 (UTC)
- I had never heard of LW until reading the article on RW a few years back, so I went in gave it a look. For reasons that are irrelevant to this discussion, I came to dislike the LW community with some rapidity. This example of cultish behavior is not surprising in the slightest. --Inquisitor (talk) 19:45, 18 December 2014 (UTC)
- Well, I mean, it's kinda what happens any time a bunch of atheists get together and accidentally start a religion. The community's central ideas become immutable truths. Honestly, I appreciate the existence of LessWrong, because it gives our dark, corporate-controlled cyberpunk future that inevitable cult of computer worshipers that the genre demands. The fact that they aren't a motorcycle gang too is a bit depressing, though. Ikanreed (talk) 19:53, 18 December 2014 (UTC)
- On the one hand, he was speaking for himself there, and he pointed out that "I do not control anyone except myself." On the other hand, he indirectly controls people other than himself. He's in a position of being able to set a (small) mob on someone, but I don't know if he could get the same mob to stop. (We've already seen how bad he is at Internet PR...) Player 03 (talk) 21:45, 2 September 2015 (UTC)
Yeah, I've always gotten the creepy cult vibe from LW, especially given the way that site regulars seem to have a compelling need to bring up Yudkowsky and the site in pretty much any context. "Greetings, sir, have you heard the Gospel of Yudkowsky?" Any bets on when one of them will start a compound and call for other LWies to move there? Unfortunately they'll probably wind up tarring the whole rationalist/skeptic subculture by association if they get into the public eye, but whatcha gonna do. In a nice bit of irony, the champions of rationalism are displaying many of the same irrational behaviors they focus so much on criticizing: leader worship, shunning outsiders, believing that things will happen if they wish really hard. --Ymir (talk) 00:34, 19 December 2014 (UTC)
- Yeah it definitely felt "off" to me as well. But I freely admit this is due in part to my own biases. To me, a community of individuals that is earnestly seeking to be "less wrong" would sound like: "Hey! I discovered this cool thing today! Anybody else know about this? Let's talk about it!". Instead LW read a lot like "Gather round children and come bathe in the master's newest revelation. Oh, you disagree? How charming. Go and drink deeply of the master's Sequences. Once you have understood and accepted them in fullness, you too will be welcomed into the body." --Inquisitor (talk) 01:06, 19 December 2014 (UTC)
I suspect this article needs another rewrite. I am probably cursed with the greatest quantity of detailed information, but might come across as excessively pissed off. Anyone else feeling trepidatious? - David Gerard (talk) 15:09, 19 December 2014 (UTC)
- The recent name-change to MIRI also gives off a "We're not nutbars, honestly!" vibe. Nebuchadnezzar (talk) 22:32, 19 December 2014 (UTC)
- That was actually a deal with Ray Kurzweil - people kept assuming they were part of his organisation and there was a lot of confusion, so they sold the name, singularity.org and the Singularity Summit to him (quite friendly, they all know each other and he's on their advisory board) - David Gerard (talk) 22:53, 19 December 2014 (UTC)
The quote isn't representative of the community's response, and when I attempted to mention other users' responses, David Gerard removed it. If this is going to be about Yudkowsky in particular, why isn't it here? Player 03 (talk) 21:49, 2 September 2015 (UTC)
- Dunno. You might ask DG directly. αδελφός ΓυζζγςατΡοτατο (talk/stalk) 22:40, 2 September 2015 (UTC)
Harassment or SIWOTI?[edit]
The opening paragraph from the post in question: "You may know me as the guy who posts a lot of controversial stuff about LW and MIRI. I don't enjoy doing this and do not want to continue with it. One reason being that the debate is turning into a flame war. Another reason is that I noticed that it does affect my health negatively [...]"
So when he says "it does affect my health negatively," what does "it" refer to?
In the first sentence, he talks about how he posts controversial stuff, and in the second, he says he doesn't enjoy doing so anymore. So maybe "posting controversial stuff" is what was hurting his health. (In other words, he's saying he has SIWOTI syndrome, and he's getting health problems because he's so addicted to refuting claims.)
But then again, the third sentence brings up a debate-turned-flame war, which could certainly be a source of stress. This is right before "it does affect my health," so the flame war has to be the "it," right? And a lopsided flame war probably does count as harassment...
Except that that's ignoring the "one reason...another reason" construction. Sentences three and four are separate, which is why I think the "it" refers to sentence one instead.
tl;dr: The page says "He posted to LessWrong saying that the ongoing harassment had been affecting his health," but when I read his post, he seems to be saying something else entirely. I suggest replacing it with something like "He posted to LessWrong saying that the stress of debating an entire community was affecting his health." Player 03 (talk) 23:38, 2 September 2015 (UTC)
Noting for future use[edit]
- Post on IEET:[4] (can't find any other detail on this)
- MIRI/SIAI tried to “take over” the transhumanist group HumanityPlus 3.5 years ago, when four SIAI members ran for H+’s Board. SIAI ran a sordid, pushy, insulting campaign, bribing voters, accusing opponents of “racism”, deriding Board members as “freaky… bat-shit crazy [with] broken reasoning abilities.” MIRI failed in their attempt to colonize H+, but they’ve successfully wormed their way into the heart of EA.
- Transhumanism wasn't always as furiously right-wing as it is now. A similar colonisation happened in 2008-2009, when the libertarians moved in and took over from the more socialist types. From THE POLITICS OF TRANSHUMANISM AND THE TECHNO-MILLENNIAL IMAGINATION, 1626–2030 by James J. Hughes (a PDF I have here):
The elective affinity between libertarian politics and Singularity can be partly explained by the idea of technological inevitability. Collective agency is not required to ensure the Singularity, and human governments are too slow and stupid to avert the catastrophic possibilities of superintelligence, if there are any. Only small groups of computer scientists working to create the first superintelligence with core “friendliness code” could have any effect on deciding between catastrophe and millennium.
This latter project, building a friendly AI, is the focus of the largest Singularitarian organization, the Singularity Institute for Artificial Intelligence (SIAI), headed by the autodidact philosopher Eliezer Yudkowsky. In “Millennial Tendencies in Responses to Apocalyptic Threats” (Hughes 2008), I parse Yudkowsky and the SIAI as the “messianic” version of Singularitarianism, arguing that their semi-monastic endeavor to build a literal deus ex machina to protect humanity from the Terminator is a form of magical thinking. The principal backer of the SIAI is the conservative Christian transhumanist billionaire Peter Thiel. Like the Extropians Thiel is an anarcho-capitalist envisioning a stateless future and funder of the Seasteading Foundation, which works to create independent floating city-states in international waters. He also is the principal funder of the Methuselah Foundation, which works on anti-aging research. In 2011 and 2012 Thiel was the principal financier of the SuperPAC backing libertarian Republican Ron Paul, and he supports other conservative foundations and political projects on the right.
In 2009 the libertarians and Singularitarians launched a campaign to take over the World Transhumanist Association Board of Directors, pushing out the Left in favor of allies like Milton Friedman’s grandson and Seasteader leader Patri Friedman. Since then the libertarians and Singularitarians, backed by Thiel’s philanthropy, have secured extensive hegemony in the transhumanist community. As the global capitalist system spiraled into the crisis in which it remains, partly created by the speculation of hedge fund managers like Thiel, the left-leaning majority of transhumanists around the world have increasingly seen the contradiction between the millennialist escapism of the Singularitarians and practical concerns of ensuring that technological innovation is safe and its benefits universally enjoyed. While the alliance of Left and libertarian transhumanists held together until 2008 in the belief that the new biopolitical alignments were as important as the older alignments around political economy, the global economic crisis has given new life to the technoprogressive tendency, those who want to organize for a more egalitarian world and transhumanist technologies, a project with a long Enlightenment pedigree and distinctly millenarian possibilities.
In surveys I conducted in 2003, 2005, and 2007 of the global membership of the World Transhumanist Association, left-wing transhumanists outnumbered conservative and libertarian transhumanists 2-to-1 (Humanity+ 2008). By 2007 16 percent of respondents specifically self-identified as “technoprogressive.”
- su3su2u1 on LessWrong's strongly anti-scientific viewpoint, and how this is not just incidental:[5]
- David Gerard (talk) 17:25, 6 August 2015 (UTC)
- A great example of the anti-science shit is the time Yudkowsky waded into nuclear physics; I ranted about it: [6]. TL;DR: Yudkowsky skimmed through some popular book about the Manhattan Project, skipped any bits that were even remotely technical, and got the feeling that it must be really easy and thus that scientists are idiots. Dmytry (talk) 19:28, 4 October 2015 (UTC)
Peter Thiel is a trump delegate[edit]
[7]. Hipocrite (talk) 21:01, 25 May 2016 (UTC)
- It's irrelevant. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 22:16, 25 May 2016 (UTC)
- Actually, Yudkowsky's been going well out of his way (on Facebook, so not reliably linkable from here) to loudly say how it's not relevant at all and that Thiel totally isn't even a Trump supporter (which doesn't match with Thiel's well-documented political views), so it may be more relevant than you think - David Gerard (talk) 08:50, 26 May 2016 (UTC)