Talk:LessWrong/Archive20

This is an archive page, last updated 4 July 2024. Please do not make edits to this page.

Some positive words

I thought it was time for some positive words. This exchange, between Eliezer Yudkowsky and Carl Shulman (a MIRI research fellow), caused me to update positively about MIRI. It shows that healthy disagreement is possible, even when it pertains to Eliezer Yudkowsky.

That they changed their name from 'Singularity Institute' to 'Machine Intelligence Research Institute' is another very positive sign. And of course that they finally published some technical results, taken note of by people such as John Baez, is even more important.

That there is now a concise and referenced overview of MIRI's beliefs is also valuable. It remains to be seen if they continue to output progress reports and generally valuable content such as their Decision Theory FAQ.

(I was too lazy to write a blog post so I thought I'd just dump it here :-) XiXiDu (talk) 12:04, 3 August 2013 (UTC)

"I'm saying I don't know how to estimate heroic probabilities. I do not know any evenhanded rules which assign 'you can't do that' probability to humanity's survival which would not, in the hands of the same people thinking the same way, rule out Google or Apple, and maybe those happened to other people, but the same rules would also say that I couldn't do the 3-5 other lesser "impossibilities" I've done so far. Sure, those were much easier "impossibilities" but the point is that the sort of people who think you can't build Friendly AI because I don't have a good-enough hero license to something so high-status or because heroic epistemology allegedly doesn't work in real life, would also claim all those other things couldn't happen in real life, if asked without benefit of advance knowledge to predict the fate of Steve Wozniak or me personally; that's what happens when you play the role of "realism"." This guy can't even write coherent English. ħumanUser talk:Human 06:07, 6 August 2013 (UTC)
He's writing fluent narcissist there :-) Radioactive afikomen Please ignore all my awful pre-2014 comments. 06:43, 6 August 2013 (UTC)
I'd agree with that estimation. ħuman (talk) 06:49, 6 August 2013 (UTC)
Your lesser minds have a mechanism to attack anyone who you think doesn't have a good enough hero license to write such high-status texts. Or something. Dmytry (talk) 18:43, 29 August 2013 (UTC)
So, Alex, when are you going to abandon your agnostic ambivalence and become the LessWrong atheist you know yourself to be? As long as Yudkowsky's in charge, the changes to which you refer are just window dressing. You'll notice he doesn't let Luke Muehlhauser or Michael Anissimov take the big interview with Carl Shulman. It's always about Big Yud. Never forget that. User:tuttlemsm 6:05, 3 August 2013 (CST)
Here is my reply. Consider emailing me at ak@xixidu.net if you have more questions. XiXiDu (talk) 13:11, 4 August 2013 (UTC)
I do not have "sufficient privileges" to post a reply over at spacebattles; indeed, the thread does not even appear to me unless I search for it. So I'll say here what I would've said there. I think that criticism of LW conflates what are really two separate issues: (1) it's a cult and (2) their beliefs are nutty. One can object to LW on the basis of one without the other. There are those who object to it as a cult, but think transhumanism itself is a perfectly fine idea, and therefore they agree with proposition (1) but not with proposition (2). I would put Richard Loosemore and yourself in this camp. There are those who don't think it's a cult--- who will say things like "be careful throwing that word around"--- but who are not on board with transhumanism as an idea. I can't honestly name anybody who maintains this form of objection to LW offhand, but I have seen it on various websites (like, say, spacebattles or BO). Then there are those who object on both fronts like Dale Carrico and, for the most part, myself. I would have assumed at first that anyone objecting to LW made for natural allies, but I have since learned this is not the case. I'm facebook friends with Loosemore, and he cautioned me on my song satirizing Michael Anissimov's article "Six Places to Nuke When You're Serious," pointing out that poor Michael only intended the article to be a wake-up call (to which I replied, bullshit, you can't tell me he wasn't having fun writing that article; the proposition that any modern-day terrorist could acquire exactly six loose nukes and take out the six places he cites is absurd; and there is a double standard, because roko was drummed out for giving future evil robots bad ideas--- but because Michael waxes Big Yud's car, he gets a free pass for providing future robots with very bad ideas). So I suspect that you object to the cultish aspects of the LW culture, but you still would like people to take the core ideas of transhumanism seriously. And that's why I don't write songs about you or Loosemore. You're serious people trying to pursue the topic legitimately, through bona fide academia, and I respect that. I still have deep skepticism of transhumanism writ large, but I could be persuaded by *legitimate publications in peer-reviewed journals*. Loosemore has indeed pointed me to some of his own work, and I find it very interesting. But I do think the LW *take* on transhumanism is nutty. I mean, BS-crazy. It's ironic, because as much as we're all united here in general by atheism, you have the same problem as most religions--- that the fundamentalist wackos (that would be Yudkowsky and LW) get more attention than the relatively unoffending moderates. I'm not sure what you can do it about. Maybe I'll write a song in praise of moderate transhumanism pursued in legitimate academia. All this said, I think it's a tremendous mistake to offer olive branches to the crazies, or to think that you can direct them to behave better by praising the few things they do well. They're not going to change because. They. Are. A. Cult. And cults don't change. tuttlemsm 21:51, 4 August 2013 (UTC)
Also, if you are inclined to post this comment on spacebattles on my behalf, feel free. I am frustrated that I am being discussed there but I cannot comment on my own behalf. I am Robert Gross, obviously. tuttlemsm 21:59, 4 August 2013 (UTC)
What do you believe Transhumanism to stand for? Do you disagree that technology can and should be used to improve the human condition?
Anyway, being a transhumanist does not imply that you call people lousy parents if they do not sign up their kids for cryonics, as Yudkowsky does here (Make a video about that! More craziness here). Being a transhumanist does not imply that you should be concerned about a simulation shutdown, as Luke Muehlhauser suggests. Being a transhumanist does not imply that you worry that future superintelligences will punish you if you do not help to create them.
I am all for improving the human condition by means of technology. I believe that life extension will be possible and should be researched. Research on how to stop aging? Great. Research on cryonics? Great. Telling people that they are lousy parents for not signing up their kids for cryonics because there will soon be an intelligence trillions of times smarter that will fix all the damage? Crazy!
And by the way, the claim that Michael Anissimov gave future evil robots bad ideas won't be received too well. That is not how Roko's basilisk works. And if such a superintelligence is possible, then it does not require humans to tell it how best to destroy civilization. Those people do not take such mistakes lightly; everyone has to know how their intricate scenarios work, or they will just shrug you off.
P.S. Check out this thread. XiXiDu (talk) 08:39, 5 August 2013 (UTC)
Simulation shutdown lol, I didn't see that one. Now that's a great example of translating fundamentalist Christianity into nerdier terminology. Dmytry (talk) 12:04, 5 August 2013 (UTC)
Here is more:

Risks from synthetic biology and simulation shutdown look like they might knock out scientific advancement before we create an AI singularity.

and

Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it’s easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.

Link to the first quote and the second quote. XiXiDu (talk) 12:12, 5 August 2013 (UTC)
A few points from me... first off, excessive renaming doesn't correlate with anything good, so it's not a good sign, let alone a very good one.
Secondly, with regard to the videos: without watching them, one just thinks that making such videos is a totally wacko thing to do, though the videos seem not to be what one imagines (much better humoured).
With regard to categorization of the critique: there's an entire unholy cluster there which also includes pseudoscience, racism, sexism, Randian objectivism, and a grotesque, grandiose case of paranoid exceptionalism (that whole thing with friendly AI by Yudkowsky vs unfriendly AI by people with actual talent). All of that is proclaimed rational and rationalized, often using technobabble. So there are a lot of different ways to dislike it. Dmytry (talk) 08:01, 5 August 2013 (UTC)
I think I've said that I'm in favor of "transhumanism" if it is achieved through legitimate, mainstream research institutions. I'm not challenging the legitimacy of what you do, or what you think you're trying to do, but--- and I imagine you've heard this before--- what is "technology"? What is "improvement"? What is "the human condition"? It just seems like a lot of transhumanist pursuits--- even the legitimate ones--- are undertaken with some core philosophical assumptions that remain insufficiently examined. I wear eyeglasses, but I don't want half my brain replaced with a hard drive. I'm sure there's a vast gulf between eyeglasses and cybernetic brains, and I am not comfortable articulating where I would draw the line within that vast gulf. I have to admit that Mr. Carrico's charge resonates with me: that so much of even the legitimate avenues of research amounts to wishful thinking about death denial. Are you sure that people living to be 200 years old is inherently a good thing? How would that affect the global ecosystem? What if you really could conquer death? How would that affect "the human condition"? Would it be recognizable? See, I'm just a dumb composer. I'm not a scientist. But that's why, with all due respect, you guys in science need us guys in the humanities to ask these questions. I'd sooner leave the resolution of these questions up to a poet than up to the Eliezer Yudkowskys of the world.
Secondly, I get how Roko's Basilisk works. And it's profoundly silly. I'm sorry, but I'm not going to waste my time arguing the minutiae of "intricate" scenarios if the totality of all the intricacy is pure, unmitigated bullshit. The bottom line is that Roko risked giving some bad future robots some bad ideas on how to put the hurt on humanity. My point is, every human-made, published bad plan on how to put the hurt on humanity is going to be available to these hypothetical future malignant AIs. See, what really differentiates Roko from Michael A. is *not* that Roko's idea entails some impressive degree of anachronistic acausality or other (nonsensical, arcanely theoretical) "intricacy"; no--- what differentiates Roko from Michael A. is that Michael A. is a sycophantic Eliezer Yudkowsky cheerleader who berates any criticism of Big Yud as so much jealousy, singing Big Yud's praises to the cyberheavens--- and Roko isn't. (In fact, I imagine that Roko's capacity to imagine his basilisk scenario, ingenious as it is as a tidy work of science fiction, was received as an intellectual threat to Yudkowsky, so Yudkowsky had to show a potential intellectual rival the door.) You see "intricacies" and I see things that are really, really that simple. In my song "The Ballad of Big Yud," I summarize the entire fiasco succinctly: "time runs backwards according to Big Yud." And that's all the respect his intellectual masturbation about acausality deserves.
See, Alex, what I think you're not getting is that I'm not playing their game. I'm playing *my* game. And my game is played on a guitar. Or, at least guitar samples from a MIDI box. And it's a game they never anticipated, not even for all the anticipatory magic that is Bayes's Theorem. I'm not going to go tete-a-tete with these people in one of their circle jerks. It's a waste of time. If there's some place where I slip up and get some arcane point about Roko's basilisk wrong and they pounce on it, who cares? I'm not trying to convince *them*. I'm trying to convince the uninitiated--- the people who are considering joining them, as I once did--- how silly they are, and possibly how dangerous they are, and to stay away from them. I'm not going to waste time on the endless minutiae; I'm zeroing in on the essential essence, and the essential essence is that they're an incestuous, mutually reinforcing ridiculous stinking turd of a cult. And I'm betting that their mountains and mountains of bullshitic verbosity can't compete with my patented half-diminished seventh chord.
Thirdly, maybe I am wacko for making these videos. I don't deny it. Or maybe I'm just really pissed off that rather than living in the best of all possible worlds, I instead live in this one, where billionaires can waste their money on such obvious nonsense while worthwhile causes scrap in the streets every day for breadcrumbs. But I do agree, there are a lot of ways to dislike those guys. tuttlemsm 12:44, 6 August 2013 (UTC)
First of all, I mostly use the transhumanism label for signaling purposes these days. When I was younger, I added it to my homepage for some reason and never bothered to remove it. Secondly, I am not an academic, far from it. I am just a 104-IQ guy (my IQ was once measured to be far lower than that) submitting brain dumps to the Intertubes. I have no education worth mentioning (although I have been trying to change that in recent years). I have worked as a street builder and a baker, and now sometimes work as a gardener.
One day I came across Less Wrong (named Overcoming Bias back then). Some years later the Roko incident happened, leaving me with the impression that those people are batshit insane. And here I am, laughing my ass off at those supposedly high-IQ, narcissistic and psychotic world saviors who appear to be only a little above my level, which is arguably laughably low compared to someone I would expect to be able to save the world.
Regarding Roko's basilisk: very loosely, the idea is that future super-robots will punish you if you do not help to create them, so that fear of this punishment motivates you to create them. What Roko did was make people aware of this possibility. Others then reasoned that even a low probability of being influenced by that threat could make it worthwhile for the future super-robots to blackmail them into doing whatever future super-robots might want them to do. It is a self-fulfilling prophecy, if you are dumb enough to act on it. Speculating about how to destroy the world with nukes is in no relevant respect similar.
And as you are playing your game, I play mine. There is not much that bothers those people more than treating them with respect, because then they can't readily ignore you or call you a troll. They actually have to reflect on their fantasies a little bit, which they normally don't have to do because they have surrounded themselves with like-minded fanboys.
Anyway, I love your videos. I just wish you'd try to be a little bit more accurate. And no, you are not a wacko for making those videos. Satire of public figures (they want to take over the world after all) is completely normal. XiXiDu (talk) 08:43, 6 August 2013 (UTC)

By the way, my latest YouTube opus can be searched for by name: "Heliocentrism (A Musical Portrait of Eliezer Yudkowsky And His Orbit)," and it is no joke. tuttlemsm 1:21, 6 August 2013 (UTC)

Creepy! I posted all your videos and their descriptions here. I will add new videos to that page as they come. XiXiDu (talk) 09:30, 6 August 2013 (UTC)

Doing something for people distressed by the basilisk

moved to the shiny new Talk:Roko's basilisk. Requesting thread archival. Plutocow (talk)

Of course the LessWrong crowd is pro-gamergate

Source. 204.11.142.106 (talk) 15:52, 7 June 2016 (UTC)

which is the correct way to be. — Unsigned, by: 70.198.56.162 / talk