Talk:LessWrong/Archive14


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Does Yvain believe in cryonics?

Been trying to find this out; this is the closest thing. Does anyone here know?--Baloney Detection (talk) 18:59, 9 December 2012 (UTC)

More pressingly: "Does anyone on RationalWiki care if Yvain believes in cryonics?" Scarlet A.pngd hominemModerator 19:04, 9 December 2012 (UTC)
I want to see if he really is such a bright guy as has been claimed.--Baloney Detection (talk) 19:14, 9 December 2012 (UTC)
The best way to find out is probably to ask him.--ADtalkModerator 22:34, 9 December 2012 (UTC)
There are plenty of people I know who don't believe in cryonics but whom I wouldn't consider especially "bright". Scarlet A.pngbominationModerator 22:43, 9 December 2012 (UTC)
That kind of reasoning would be affirming the consequent. You're arguing that the following argument is invalid:
If X is smart, then X would not believe in cryonics.
X does not believe in cryonics.
Therefore, X is smart.
And indeed that argument is invalid - it's a fallacy. But it's not what BD is asserting: his point is that if Yvain does believe in cryonics, it would cast doubt on his intelligence. - LucidFox (talk) 14:22, 10 December 2012 (UTC)
I am aware of that. I was just trying to backhand him. Scarlet A.pngssholeModerator 15:58, 10 December 2012 (UTC)
Like most deductive fallacies, that's a valid inductive argument. If 10 out of 100 smart people believe in cryonics, and 90 out of 100 stupid people believe in cryonics (and that's everyone), then 'all people who believe in cryonics' will include 10 smart people and 90 stupid people, so a random cryonics believer would have a 90% chance of being stupid, which is much higher than the chance a random person would be stupid (50%). Gwern (talk) 23:23, 31 December 2012 (UTC)
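A quick sanity check of Gwern's toy arithmetic, using his illustrative 10/90 split and the assumed 50/50 smart/stupid base rate (all numbers are his hypotheticals, not survey data):

    # Gwern's toy population: 100 smart people and 100 stupid people.
    smart_believers = 10    # smart people who believe in cryonics
    stupid_believers = 90   # stupid people who believe in cryonics

    p_stupid_given_believer = stupid_believers / (stupid_believers + smart_believers)
    p_stupid_base_rate = 100 / 200

    print(p_stupid_given_believer)  # 0.9
    print(p_stupid_base_rate)       # 0.5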
I'm doing some work for the Max Planck Institute of Neurobiology now, actually; I don't think any of the neuroscientists that I develop software for are signed up for cryonics lol. As of now, cryonics is something for religious folks who have a grudge against some superfluous aspect of religion. Dmytry (talk) 09:45, 10 December 2012 (UTC)
I've written up a sort-of-answer to this question for you guys here - Yvain

Roko's Basilisk

I read this article for the first time yesterday. The Roko's Basilisk bit is hilarious. Lots of deliciously silly religious thinking there. SophieWilder 09:13, 28 December 2012 (UTC)

The plan for a singleton friendly AI is, literally, a plan to build our own benevolent local God. It never seems to be discussed in these terms at LW, for some reason - David Gerard (talk) 09:23, 28 December 2012 (UTC)
For your intransigence, you two are thoroughly doomed by our future overlord. I didn't use the term "doom" lightly. Doom the video game was likely a backwardly-caused illustration of the perils we might face. There may possibly be a more-than-insignificant probability that my analysis is largely correct.--Brendiggg (talk) 09:53, 28 December 2012 (UTC)
You've doomed yourself by not oversighting this section. SophieWilder
Maybe it's an IRL doomsday cult with an online board dominated by outsiders? That seems to fit very well. edit: i.e. basilisk is one of those things not to be discussed with outsiders, who debunk it. Dmytry (talk) 11:09, 28 December 2012 (UTC)

A how-to for escaping torture by eldritch abominations: http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80v8#20121230img . tl;dr: if you are doomed, blow up your head. Dmytry (talk) 23:38, 29 December 2012 (UTC)

Just added capture tags to that one, before the mods notice and delete the thread - David Gerard (talk) 11:28, 30 December 2012 (UTC)
The basilisk is now a truly mind-blowing concept. Dmytry (talk) 20:12, 30 December 2012 (UTC)
*Ba-dum tish* Star of David.png Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:02, 30 December 2012 (UTC)

LW 2012 survey results

Here. Kinda fascinating. Apparently four people who are active enough to take the survey got referred there by RW. I'm surprised by how (relatively) many of the survey takers' comments were negative about LW.--Baloney Detection (talk) 10:22, 2 December 2012 (UTC)

An avg. IQ of 125 on iqtest.dk is kind of interesting. I was under the impression that ~120 on iqtest.dk, diligently taken, corresponds to an IQ of 100, the online test inflating the IQ scores (but I may be thinking of another test). The null hypothesis would be that they have an average IQ of 100. The self-reported IQ is quite meaningless, as there are multiple very different measures known as IQ. edit: got to give them this one though, iqtest.dk is not the test I was thinking of. Dmytry (talk) 21:21, 2 December 2012 (UTC)
Self-reported IQs are meaningless; there is no way they can be verified (and there is dispute over whether IQ is actually meaningful). I think it's that high because it's part of their self-image that they are so rational and super-smart, and of course you can't have a low IQ then. So those who score badly on IQ tests may simply abstain from answering the question or just make it up as they go along.--Baloney Detection (talk) 21:24, 4 December 2012 (UTC)
If you ever took iqtest.dk, you'd know there is no way that 120 on it corresponds to a normal 100; I'm not sure what 120 corresponds to on it, but whatever it is, it's an underestimate and not an overestimate. You are thinking of some other test. --Gwern (talk) 23:17, 31 December 2012 (UTC)
Ya, I said it wasn't the test I was thinking of. I don't think you can conclude it's a lowball or a highball. The tests do not correlate perfectly, especially at the far ends of the range; they measure slightly different aspects of cognitive ability. Consequently, someone with a high IQ on iqtest.dk (or Raven's matrices in general) would have an IQ closer to the mean on other types of tests, and someone with a high IQ on other types of test would have an IQ closer to the mean on pure Raven's matrices. Dmytry (talk) 18:27, 2 January 2013 (UTC)
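To make the regression-toward-the-mean point concrete, here is a minimal sketch under a bivariate-normal model; the correlation of 0.7 between the two tests is a made-up illustrative figure, not a measured property of iqtest.dk or any other test:

    # Expected score on test B given a score on test A, when the two tests'
    # standardized scores correlate with coefficient RHO (bivariate-normal model).
    RHO = 0.7    # hypothetical correlation between the two tests
    MEAN = 100   # conventional IQ scale; with equal SDs on both tests the SD cancels out

    def expected_other_test(score_a, rho=RHO):
        """Best guess of someone's score on test B given their score on test A."""
        return MEAN + rho * (score_a - MEAN)

    print(expected_other_test(125))  # 117.5 -- regression toward the mean, in either direction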

Any idea what's going on here?

http://lesswrong.com/r/discussion/lw/g24/new_censorship_against_hypothetical_violence/ Loafinglurker (talk) 23:13, 25 December 2012 (UTC)

I wouldn't be terribly surprised if someone had started a thread saying "let's kidnap the head of DARPA and force him to fund a benevolent AI" or somesuch. I mean, once you get people who take Roko's basilisk seriously, literally any crime becomes hypothetically meritorious if it helps the cause.
Either threads like that were started and they made Yudkowsky nervous, or he's just fretting in general about LessWrong's reputation and settled on the most obvious response, which is "ban discussions about criminal activity." (I hope it's the former, because the latter implies the disappointing notion that Yudkowsky is smart enough to know that LessWrong has critics, but not smart enough to realize its critics mostly say that once you get past all the rationalism writings, the site is dominated by Singularitarian crackpots, transhumanist hucksters, and ignoramuses who know nothing about how AI really works.) Star of David.png Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:24, 26 December 2012 (UTC)
Nah, someone started a thread asking whether it would be ethically meritorious to off tobacco executives, as tobacco is a product that kills most people. Then Yudkowsky chimed in, basically saying that (a) it makes them look bad and (b) if the crime is bad it's bad, and if the crime is good, talking about it on the effing internet is still bad. Both valid points, but something feels amiss. He then argues that even though he says some AI would kill everyone, and has a zillion and one trolley-problem discussions (not to mention the pre-existing notion of self-defence), he's not conceptually promoting violence: http://lesswrong.com/r/discussion/lw/g24/new_censorship_against_hypothetical_violence/84qx . I guess if he speculates in an airport that someone's cellphone may be the trigger for a bomb, and this person gets tackled face-first into the ground, he thinks he's in the clear because he didn't conceptually advocate violence, and can get out of it by pointing at how he deletes messages advocating bomb threat -> violence, unlike those bad guys. Something else: apparently they had someone who talked about getting life insurance with the Singularity Institute as beneficiary (edit: correction, 'best charity', not SI) and killing themselves http://lesswrong.com/r/discussion/lw/g24/new_censorship_against_hypothetical_violence/84cc , and someone actually almost got inspired but was tentatively talked out of it. I can't find that any more; I guess Yudkowsky deleted it. edit: added sources. Dmytry (talk) 07:40, 26 December 2012 (UTC)
Hm. I was wrong on both counts, then. Star of David.png Radioactive afikomen Please ignore all my awful pre-2014 comments. 07:53, 26 December 2012 (UTC)
Who knows what the tobacco executives were a stand-in for, though. Look at this http://www.gwern.net/Terrorism%20is%20not%20Effective#competent-murders and search for Goldman; this guy used Goldman Sachs as an illustration, and in another article the same guy discussed ending Moore's law and concluded that sabotaging the fabs is ineffective. Either I am paranoid, or a highly anonymous online person who discusses the destruction of Goldman Sachs by assassination, and discusses stopping Moore's law by sabotaging fabs, just *might* have in mind some permutation of the two. edit: to clarify, that's a link to the website of one of the top contributors at LW. Dmytry (talk) 08:07, 26 December 2012 (UTC)
I've pointed out your idiocy and harping on this point many times, Dmytry (and I'm not surprised to see you pushing your stuff here as well, now that most of your sockpuppets are blocked), but I have to confess - you got me! All my essays are just preparation for my sinister campaign to destroy Goldman Sachs using Moore's law! If you won't tell anyone, I'll cut you in on the proceeds, and I am prepared to be very generous. --Gwern (talk) 23:18, 31 December 2012 (UTC)
Ahh, that's what you have in mind! My sincerest apologies: I was getting worried you had in mind stopping Moore's law by the methods that you speculated would work. By the way, just as SI researchers do not support one weird understanding of Leslie's firing squad example, Yudkowsky doesn't support such hypothetical violence. Dmytry (talk) 09:48, 1 January 2013 (UTC)
LW has often been described as cult-like, but that suicide notion pushes it firmly into the realm of the most dangerous cults. Presumably, the individual contemplating this believed they would get extra privileges when they are inevitably and electronically resurrected. Also, presumably, they were convinced that being sincere in their intent was sufficient for their eternal reward.--Brendiggg (talk) 09:13, 26 December 2012 (UTC)
No idea if resurrection was the plan; I'd like to say no, but then I recall the basilisk, which worked by resurrection. All I know is that when they were discussing http://lesswrong.com/lw/2r/really_extreme_altruism/ , someone mentioned a person who contemplated suicide on such grounds (an actual person, not just the original hypothetical). This is all I know. Yudkowsky couldn't have been more explicit about how he wants to respond when someone connects some dots - look, unlike this bad person here, we have a policy of deleting posts that connect the dots. Note, by the way, that in practice, whenever they discuss some violence, the response is overwhelmingly good reasons not to do said violence. That will no longer occur. edit: To summarize, for better or for worse, the acolytes are not to discuss in public the violent consequences of the ideology and the beliefs which are being promoted. They still gather together in real life, but with fewer outsiders present. Dmytry (talk) 10:23, 26 December 2012 (UTC)
Absolutely. The extreme actions being bandied about (suicide and assassination) to further the cause are Basilisk-related, and if the dots haven't been connected by now for any LWer, they never will be.--Brendiggg (talk) 10:50, 26 December 2012 (UTC)
I have no idea if it's basilisk-related, tbh, or just doomsday-related; the basilisk is a good example of crazy, though. I don't think they expect Tiplerian resurrection - at least, those into cryonics don't? Dmytry (talk) 11:56, 26 December 2012 (UTC)
Also, pedantic correction, the talk was about donating to "best charity", not specifically SI. Dmytry (talk) 13:22, 26 December 2012 (UTC)
Not so pedantic. I picked up on that but promptly forgot. Reading comprehension gets a little compromised during the festive season.--Brendiggg (talk) 13:37, 26 December 2012 (UTC)
I almost feel bad for Yudkowsky; it's not like he's driving people crazy so much as he's being a crazy magnet.

The complicated details of the basilisk affair

Above, in the "enclave" discussion, Yvain makes the worthy observation that "common sense" protects most people from anything like fear of the basilisk. But his account of the basilisk episode overlooks some of the basic details, which can be verified by looking at the recent "basilisk 101" post on Alexander Kruel's blog.

First, Roko was not just trolling LW. The basilisk was introduced as one ingredient in a complex confection of ideas. Synopsis: People who work on existential risk mitigation get socially punished for spending time on this strange but worthy cause. How can we get around this? Meanwhile, in some future Everett branches there might be a Friendly AI that will punish people who knew about x-risk / singularity and didn't donate all their money. So today's oblivious masses will get to experience a happy post-singularity eternity, while the minority of people who *were* x-risk-conscious but who slacked off in some way will be punished by the Robot God (to use Dale Carrico's term). Wouldn't that be unfair! Fortunately, there is a way out. Just spend a relatively small sum (a few tens of thousands of dollars) on stock-market bets that are very unlikely to succeed but with a huge payoff if they do, with the precommitment that if you do win, you will spend all the proceeds on FAI / x-risk mitigation / your favorite futurist charity. Happily, this deals with both problems at once - you get to be a socially normal person who fulfils their need to save the future with a quietly eccentric investment strategy; and perhaps you can also avoid punishment by the Robot God, because you've done the right thing: in the Everett branches where you *do* get rich, you go on to be a great x-risk philanthropist.
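Read in ordinary expected-value terms (Roko framed it in terms of Everett branches and gave no actual odds; the figures below are invented purely for illustration), the scheme looks like this:

    # Hypothetical long-shot bet with a precommitment to donate any winnings.
    # All numbers are invented for illustration; Roko's post specified none of them.
    stake = 30_000          # "a few tens of thousands of dollars"
    win_probability = 0.001 # chance the long-shot bet pays off
    payout_multiple = 500   # gross return on the stake if it does

    winnings_if_win = stake * payout_multiple           # 15,000,000
    expected_donation = win_probability * winnings_if_win

    print(winnings_if_win)    # what the "rich branch" version of you donates
    print(expected_donation)  # 15000.0 -- the averaged-over-branches view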

Then, Eliezer pounced on the ingredient of "acausal blackmail by a future AI" in this scenario. In the comment announcing that he would "ban" (censor, hide from public view) the post, he gave two reasons: first, don't give people nightmares by introducing scary ideas; second, don't give distant superintelligences an opportunity to acausally blackmail you by considering whether to make a deal with them. This is the part of the scenario that may be most difficult for the uninitiated to understand. The idea is *not* just that you might be punished by a future AI, as if it were just another vengeful deity. The idea is that you and the future AI are engaged in an "acausal interaction", in which you do not communicate directly but you still know about each other through some sort of extrapolation; and the AI acausally communicates to you the threat of being punished unless you do your part to improve the future.

My core criticism of how the basilisk affair was handled has always been that this "fear" or "scenario" is based on a fallacy, which would be exposed as such if only the powers-that-be at LW permitted public discussion of what the scenario requires. The neurosis among LW's moderators about this topic derives from their conceptual investment in the idea that such acausal interactions are possible; an acausal game-theoretic equilibrium between yourself and "Omega" is Eliezer's answer to Newcomb's paradox, a thought experiment which is a perennial topic of discussion on LW. But if you think that acausal trade is possible, then acausal threats are also possible.
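For readers who haven't run into Newcomb's paradox, here is a minimal sketch of its payoff structure with the standard textbook numbers (nothing below is specific to LW's or Eliezer's formulation), showing why one-boxing wins whenever the predictor is reliable enough - the kind of reasoning TDT generalizes and the basilisk turns into a threat:

    # Newcomb's problem, standard payoffs: an opaque box holds $1,000,000 only if the
    # predictor foresaw you taking that box alone; a transparent box always holds $1,000.
    def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
        p = predictor_accuracy
        if one_box:
            return p * 1_000_000                # million is there iff predicted "one-box"
        return 1_000 + (1 - p) * 1_000_000      # sure $1,000; million only if mispredicted

    for acc in (0.5, 0.9, 0.99):
        print(acc, expected_payoff(True, acc), expected_payoff(False, acc))
    # 0.5  500000.0 501000.0   -- coin-flip predictor: two-boxing edges ahead
    # 0.9  900000.0 101000.0   -- reliable predictor: one-boxing dominates
    # 0.99 990000.0 11000.0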

Ultimately basilisk censorship is another expression of the same attitude of "extreme caution in the face of existential risk", which also underlies the categorical fear of UFAI and (dating from 2008) the LW debate over whether the LHC should be allowed to operate. The argument is that we don't yet understand acausal interactions, and therefore, so long as it's even *possible* that acausal blackmail could occur, you should not go around encouraging people to enter into such a relationship - which was Eliezer's interpretation of "Roko's Wager". Mporter (talk) 01:21, 13 January 2013 (UTC)

@Mporter You always keep quiet about the fact that Roko wanted to trade with cooperating unfriendly AI's. A quote from Roko's original post:

You can also use resources to acausally trade with all possible unfriendly AIs that might be built, exchanging resources in branches where you succeed for the uFAI sparing your life and "pensioning you off" with a tiny proportion of the universe in branches where it is built. Given that unfriendly AI is said by many experts to be the most likely outcome of humanity's experiment with AI this century, having such a lifeboat is no small benefit. Even if you are not an acausal decision-maker and therefore place no value on rescue simulations, many uFAIs would be acausal decision-makers.

This idea is in part due to Rolf Nelson's idea of using the simulation hypothesis to acausally trade with uFAIs.

XiXiDu (talk) 10:21, 18 January 2013 (UTC)
"You always keep quiet about the fact that Roko wanted to trade with cooperating unfriendly AI's." I actually hadn't noticed that part! Mporter (talk) 02:50, 20 January 2013 (UTC)
I expect one of the reasons for Yudkowsky's reluctance to discuss the issue is that, given his framework of beliefs, it would actually be rational to trade with uFAIs. As Roko wrote in the comments, P(uFAI) > P(FAI). Therefore all possible uFAIs can acausally cooperate against the FAI to outweigh any incentive given by the FAI. XiXiDu (talk) 10:21, 20 January 2013 (UTC)
Sounds like Pascal's Wager gone nuts. Peter Subsisting on honey 01:30, 13 January 2013 (UTC)
A human may be more effectively bribed with paradise than blackmailed with hell, anyway (why hell?). The basilisk absolutely relies on a fallacy, yes. But a so-called "friendly" AI that implemented TDT definitely could be blackmailed by an unfriendly AI from some parallel world. TDT is Yudkowsky's baby, as is the monumentally stupid idea of using TDT in a friendly AI. To add more injury to insult, Yudkowsky wrote pages and pages about how other AI researchers, with their shallow insights, are beneath him, and how they suck (http://lesswrong.com/lw/uc/aboveaverage_ai_scientists/ but I've seen some even worse). Dmytry (talk) 11:34, 13 January 2013 (UTC)
I wish I had the background to ascertain whether or not Yudkowsky's ideas on reverse causality had any merit. As far as I can tell, it seems to be an awkward semantic game that posits that acting on an anticipated future decision by a second party means that the second party "caused" your action. Indeed, Mporter, that section of your comment does not seem to contradict the comparison to a judgmental future AI-deity, though you seem to have meant it to; it just restates a simple idea with new terminology. I realize that in the future, all-knowing God will punish me if I do not evangelize, and so I must act now to justify myself to him rather than sinning by omission. It's an old idea - James 4:17 says, "So whoever knows the right thing to do and fails to do it, for him it is sin." And according to legend, Jesus cursed a barren fig tree (to punish it for taking up the ground while failing to produce fruit).
Maybe I'm just betraying my ignorance, but the Roko thing seems identical to the idea of the fig tree. Jesus drew an implicit comparison between his followers and the tree, pointing out to them in the lesson how the fruitfulness of their earthly time will be judged. Modern Christians do not know about God directly (generally speaking) but know of him through a sort of "extrapolation" (tradition, culture, holy books, preachers, etc.). Some few can do the serious "math" of reasoning out God's existence (i.e. prayerful revelation). But mostly they reason that it is likely that God exists and that he will judge them, and they can guess certain aspects of this judgment with uncertain but (in their view) nonzero success. It seems the same to me.--ADtalkModerator 12:20, 13 January 2013 (UTC)
I may have it wrong (because (1) it's unfinished and inchoate, and (2) it's ridiculous rubbish that quickly descends into stupid results, and rather than take the stupid results as the reductio they are, they trumpet them as significant because they're stupid), but insofar as I understand it, acausal trade in TDT requires the entities to have good-enough simulations of each other. So the premise requires that a piddling ape brain can simulate a hypothetical future superintelligent being in some meaningful way. In practice, this quickly becomes imagining what an effectively godlike being could do and, in the manner of humans throughout history, getting an answer by projecting one's apelike fears onto the imagined sketch of an entity - David Gerard (talk) 13:30, 13 January 2013 (UTC)
I think for TDT specifically the bar for a good-enough simulation is very low. Regardless, this still leaves the rather funny question of what a TDT- (or UDT-)based superintelligent AI may do about acausal threats from other AIs. It is rather fascinating to see that people afraid of an AI doomsday are actually working on an insane AI. Oh, and by the way: if the basilisk worked, then exposing the basilisk to people who reason using CDT (but who understand the logic of TDT) should make the basilisk threat go away, because CDT works like "if TDT is evil, then don't build TDT", and in this context TDT has to not be evil in order to increase the likelihood of being built. Dmytry (talk) 14:17, 13 January 2013 (UTC)

Something rather funny that I realized after reading their UDT paper (supposedly a formalization of TDT): from the outside perspective, nobody gets tortured. The people who would be tortured if they didn't donate are a (possibly empty) subset of those poor sods who did donate all they had because of the basilisk. Whereas the people who wouldn't donate even if they were to be tortured are not instances of "if UDT will torture, then donate" and do not acausally motivate the UDT agent. Had they not banned all discussion, they might have gotten from their Old Testament shit to an all-forgiving God. Dmytry (talk) 14:48, 20 January 2013 (UTC)
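A toy enumeration of that point (my own illustrative formalization, not anything from the UDT paper): the only people worth threatening are the ones the threat would move, and those people donate, so viewed from outside nobody actually gets tortured.

    # Two dispositions toward the basilisk threat:
    #   "responsive"   -- would donate if, and only if, threatened with torture
    #   "unresponsive" -- would not donate even under threat
    # Acausally, torture only "pays" against agents whose behaviour the threat could change.
    def outcome(disposition, ai_threatens=True):
        donates = ai_threatens and disposition == "responsive"
        tortured = ai_threatens and disposition == "responsive" and not donates
        return donates, tortured

    for d in ("responsive", "unresponsive"):
        print(d, outcome(d))
    # responsive   (True, False)  -- donates, so never tortured
    # unresponsive (False, False) -- ignores the threat; torturing them buys nothing
    # Either way, the set of people actually tortured is empty.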