Talk:LessWrong


This Internet-related article has been awarded BRONZE status for quality. It's getting there, but could be better with improvement. See RationalWiki:Article rating for more information.




Doing something for people distressed by the basilisk

moved to the shiny new Talk:Roko's basilisk

Of course the LessWrong crowd is pro-gamergate

Source. 204.11.142.106 (talk) 15:52, 7 June 2016 (UTC)

which is the correct way to be. — Unsigned, by: 70.198.56.162 / talk

LW on RW

[1]: as 'the rat officer (tailed grade)' would say 'Meow.' 82.44.143.26 (talk) 19:37, 1 November 2016 (UTC)

This is known -- it's from 2012. Sir ℱ℧ℤℤϒℂᗩℑᑭƠℑᗩℑƠ (talk/stalk) 21:28, 1 November 2016 (UTC)

Referring to the Less Wrong article or [2]/[3]? (Mild timewaster warning on the latter) 82.44.143.26 (talk) 18:05, 2 November 2016 (UTC)

Both. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 21:24, 2 November 2016 (UTC)

I don't understand this section

The inner reaches of LessWrong can get a little ... insular. Here's Michael Vassar, former President of MIRI and an active member of the Bay Area transhumanist community, talking to Harper's in 2014:[1]

It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what?

“Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10 percent of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.”

Using these principles, Vassar founded the failed medical advice startup MetaMed, which was built on the assumption that rational thinking from first principles would beat the conventions of the field, and rapidly went bust.

What about the quote is "insular"? What "principles" from the quote were used in MetaMed? The quote seems to be saying, "actually us rationalists are a super-duper minority, also the apocalypse is coming" -- not what either of the taglines says. 32℉uzzy; 0℃atPotato (talk/stalk) 23:51, 5 June 2017 (UTC)

the "his people" part implies that the rare rational people are the people on the site and everyone else is not only helplessly irrational but helpless when it comes to the problems of the future.Vorarchivist (talk) 13:58, 12 October 2017 (UTC)

EA orgs praising AI pseudoscience charity. Is it useful?

The BoN seems to think so. I'm not so sure, but I'm definitely not committed to leaving it out. My reasoning is that Effective Altruism is kind of a captured movement, belonging more or less wholesale to LessWrong types who take the most interest in it. The criticism we include from them represents voices that want the original intention of EA to be carried out, and the stuff added is just LWers committing to their delusion.

That's a very narrative take on why I like the way the article is, and frankly I'm not that invested in it. Your reasons are okay; I'm gonna make this case just in case anyone else cares and leave your change in. ikanreed 🐐Bleat at me 19:22, 21 August 2018 (UTC)

Comment: The article in its current form wants readers to believe that even EA charity evaluators do not believe that MIRI is a good organisation. However, while this might have been the case in 2012, it is no longer true now. Leaving the article as it is misinforms readers who put weight on the view of EA organisations. It might be the case that the community of this wiki does not believe in EA, but that is not the case for everyone who might read it. This is misleading and wrong. If it is the consensus view of this wiki's community that EA evaluators are wrong, you should not mention them; but citing them when it supports your case against MIRI and not otherwise is not rational discourse. — Unsigned, by: 192.76.8.65 / talk / contribs

Can't really disagree with that reasoning just because it doesn't tell the story I want. Go ahead and make that change. ikanreed 🐐Bleat at me 02:28, 22 August 2018 (UTC)
The reason is that the LWers have been pushing non-AI-risk people out. It is worth noting that Karnofsky tried to walk back his critique later. But the details belong on effective altruism - David Gerard (talk) 16:20, 24 August 2018 (UTC)
That is just a plain lie, and a minute of googling would have made that obvious: the share of LWers in the EA movement is lower than ever, and LW dropped out of the top 5 things that get people into EA (http://effective-altruism.com/ea/1h5/ea_survey_2017_series_how_do_people_get_into_ea/), but I give up on changing these articles now. It is not worth my time to fight the 99% of this community who just have an unexplainable hate for LW and anything remotely related to it. It is very unfortunate that people who care about rational discourse and end up here by accident (like I did) have to deal with this kind of misinformation, but that can't be changed. But I really would appreciate it if someone could explain to me where this hate comes from? I know that this wiki is not about rationality but has a left/progressive background, whereas some (but by far not the majority, and certainly not me) of LW contributors are libertarian, but is that really a reason to hate a whole movement where people are obviously trying to do good? Are there not worse people you could direct your hate at if you need an enemy you can bash to feel righteous? It makes me really sad to see people whose values I fully support on most topics (being a left-leaning atheist) use the same tools of censorship and misinformation that I thought were exclusive to fringe right-wing movements. I do not expect a reply (I rather expect this comment to be removed as well) but I am genuinely curious.
Edit: It just occurred to me that I might have fallen into the trap of assuming people are evil when misinformation explains things equally well. I am quite new to EA and have never followed LessWrong very actively, so I cannot say how these communities used to be when you formed your terrible opinion of them. So I would encourage anyone who reads this to go to a local EA meetup and maybe open LessWrong once. I am reasonably sure that you cannot possibly go to EA meetups and continue to have the impression that the people there are somehow enemies of your values. You might disagree with some of their goals and methods to change things, but the people there are just genuinely good people. — Unsigned, by: 192.76.8.65 / talk
Good and Evil are ultimately fictitious labels that humans arbitrarily assign to individuals and groups based on their biases and worldviews. ☭Comrade GC☭Ministry of Praise 19:54, 24 August 2018 (UTC)
P.S. You would be surprised how many cranks end up being labeled as "good people" by their supporters. Ultimately such defense attempts come off as more childish and naive than anything else. ☭Comrade GC☭Ministry of Praise 19:59, 24 August 2018 (UTC)
yeah, I think any edits this IP wants to make will need to go via the talk page first - David Gerard (talk) 20:33, 24 August 2018 (UTC)
You both ignored that I provided evidence that the last edit was based on wrong facts, and you still don't care? LW does not run EA, and the influence of LW people in EA is declining; nevertheless the site now says that LW people have bullied everyone else out of EA, because the truth is that people not related to LW now believe in MIRI's work and you don't like that. I find it an acceptable view not to care what the EA movement thinks and to say it's all bogus, but again, in that case you should not selectively quote EA charity evaluators when they support your view. I am still really confused why you hate LW so much that you have stopped caring about the truth, because as far as I can tell other parts of this wiki do usually stick to facts, even about things this community does not like. Can anyone elaborate on that? — Unsigned, by: 192.76.8.65 / talk
Two points: A) Please sign your posts. B) I'm waiting for an agreement on the talkpage. ☭Comrade GC☭Ministry of Praise 19:03, 25 August 2018 (UTC)
I do not know how to sign posts. I still find it weird that there needs to be agreement before facts based on statistics can be added to an article, while there did not need to be agreement for the last edit, which made a false claim based on the impression of one guy at an EA conference. Don't you kind of feel that that is a double standard? — Unsigned, by: 192.76.8.65 / talk / contribs

As someone who has been reading LW for the past month now, I do feel that the article is overly critical of it. There is a lot of good stuff in there; yes, it's long, but I am sure 99% of LWers haven't read the entirety of the Sequences. Eliezer Yudkowsky is also a very good author; that's why people read any of it at all. Also, there are unique (and good!) parts of the Sequences which you do not find in Kahneman or wherever: stuff like the Law of Conservation of Expected Confidence, Clusters in Thingspace, how words can be wrong, Trying to Try, Leave a Line of Retreat, Mysterious Answers to Mysterious Questions, thinking of every detail as a burden, splitting up your ideology; these are all I can think of right now. I do not know where else I would find such techniques to actually master rationality other than LessWrong; it's not simply discussion of how brains are biased or philosophy or whatever, these are techniques which go beyond "Fundamental Attribution Error: uh, just be nice to people, I guess."

Admittedly Yudkowsky sometimes goes crazy: weird tweets, saying that anyone who dislikes Scott Alexander is a bad person, etc. Many posts on LW actually do criticise Yudkowsky; the 2nd most upvoted post on LW is a guy arguing against him, and I also remember a post with 200 upvotes criticising Yudkowsky for saying stuff like "yeah I predicted all of this AI stuff" when he doesn't have a Metaculus account and hasn't created one either. Also he has written a lot of stuff on cults, how the cult of Ayn Rand's Objectivism works, etc.; I do not think anyone who has read the Sequences would actually be part of a "cult". Also you don't really see posts by Yudkowsky on the main front page, only on the sidebar.

Dark Rationalists are a tiny minority of LW, as evidenced by surveys.

The BoN says "I would encourage anyone who reads this to go to a local EA meatup and maybe open lesswrong once" yeah well I did open LW. It's full of AI shit. I find it dissapointing that the article doesn't go deeper into the AI-doomerism which is now a huge part of LW. Yudkowsky refuses to split the AI discussion over to the other forum which was made specifically for that purpose, and I suspect the reason may be to bring in more people into the movement who were only looking to think sharper.

I am not sure if the BoN is correct about EA. If you go to the EA forums you still see an entire section on AI safety. I would say that there are two reasons fewer and fewer EA recruits come from LW: 1. the EA movement is growing, and GiveWell sponsors some educational YouTube channels now; 2. most people on LW already know about EA, but LW isn't growing (at least, not as fast as EA).

Anyway, these were all the thoughts I had; um, that's it, yeah, gtg. 𝗦𝗾𝗿𝘁-𝟭 talk stalk 12:55, 17 February 2024 (UTC)

The Sequences do not contain unique ideas, and they present the ideas they do contain in misleading ways using parochial language. The "Law of Conservation of Expected Confidence" essay, for instance, covers ideas that are often covered in introductory philosophical methods or critical thinking courses. There is no novelty in the idea that your expected future credence must match your current credence (otherwise, why not update your credence now?), nor in the idea that if E is evidence for H, then ~E is evidence for ~H (though E and ~E may have very different evidential strength), and Yudkowsky's treatment is imprecise and, in combining multiple points, muddles things more than it illuminates them. Besides that, the former notion has sparked substantial controversy in epistemology, owing to cases wherein people apparently can reasonably expect to have their minds changed without changing them right now. While popular, Bayesianism is not univocally accepted by epistemologists, and it's not because they're irrational.
For another example, "Clusters in Thingspace" has a number of issues. Most simply, it seriously undersells Aristotle's ability to handle a nine-fingered person. Certainly, if you make 'has ten fingers' part of the definition of human, then you will be able to infer that a person without ten fingers is not a human; nobody, though, has ever seriously put forward such a proposal. For Aristotle's part, he would simply say that having a certain number of fingers is not an essential property of being human (and so should not be factored into the definition). Yudkowsky is also wrong to say that the coordinate point (0,0,5) contains the same information as the HTML color blue. To the contrary, the coordinate point by itself contains no information; it can contain color information only when paired with some interpretation function I (in the case of HTML, the software provides this function). As for where else these ideas can be found, philosophers have been working on conceptual vagueness intensely since the mid-20th century, and cluster concepts were a relatively early innovation. The philosophical literature also has the benefit of being largely free of nebulous speculations about cognition and needless formalism (and the discussion of configuration space here is needless formalism, since Yudkowsky is drawing only qualitative conclusions and the practical constraints on constructing a configuration space even for robins alone are severe). The literature also uses terminology in the ordinary way familiar to everybody engaging these issues professionally (compare Yudkowsky's muddled understanding of intension) and avoids the invention of needless terms like "thingspace", which mainly achieve the isolation of LessWrong from the external literature (whose relative richness and rigor would doubtlessly benefit them far more than the Sequences, the works of a single, self-aggrandizing amateur). That's not to say that there are no good ideas in the article, only that it is unoriginal, muddled, imprecise, and parochial. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 16:59, 17 February 2024 (UTC)
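(Aside on the "expected future credence must match your current credence" point above: under Bayesian updating this is an arithmetic fact, usually called conservation of expected evidence. A minimal sketch in Python follows; the prior and likelihood numbers are made up purely for illustration.)

# Conservation of expected evidence, a minimal numerical sketch.
# The probability-weighted average of the possible posteriors equals the
# prior, so you cannot expect in advance to end up more (or less) confident.
p_h = 0.3              # prior P(H) -- hypothetical number
p_e_given_h = 0.8      # likelihood P(E|H) -- hypothetical number
p_e_given_not_h = 0.4  # likelihood P(E|~H) -- hypothetical number

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # P(E) = 0.52
post_if_e = p_e_given_h * p_h / p_e                      # P(H|E)  ~ 0.46
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)      # P(H|~E) = 0.125

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_h) < 1e-9              # equals the prior, 0.3

# E raises P(H) and ~E lowers it: if E is evidence for H, then ~E is evidence
# for ~H, though the two updates differ in strength.

(If observing E and observing ~E could both raise P(H), their weighted average could not come back to the prior, which is all the "conservation" claim amounts to.)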
...I am going to be honest, I didn't really understand anything you said lmao. It would be useful if you pointed to some sources where those topics are discussed instead of just gesturing to them vaguely as "the literature". You also didn't mention what those concepts are called, so I am guessing what Yudkowsky does is just give really nice names to already-known philosophical topics. 𝗦𝗾𝗿𝘁-𝟭 talk stalk 10:40, 23 February 2024 (UTC)

Missing content on CFAR "cult"-like behaviour

https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

should be added. 𝗦𝗾𝗿𝘁-𝟭 talk stalk 10:48, 23 February 2024 (UTC)

Defense of Yudkowsky's quantum physics sequence

...from an actual MIT quantum computing guy https://physics.stackexchange.com/questions/23785/what-errors-would-one-learn-from-eliezer-yudkowskys-introduction-to-quantum-phy/24577#24577

and from a random LessWrong commenter who claims to have a PhD in physics: https://www.lesswrong.com/posts/x3Ckt4T2z4abt7ZKs/how-accurate-is-the-quantum-physics-sequence?commentId=r2ChCmWroXuJqSEBQ

"If you think you've learned quantum mechanics from it, you're a fool. If you think you've learned enough about quantum mechanics to be justified as a physical realist, you're correct."𝗦𝗾𝗿𝘁-𝟭 talk stalk 18:06, 23 February 2024 (UTC)