Talk:Roko's basilisk

This LessWrong-related article has been awarded GOLD status for quality. Please keep this in mind when editing the article.

This article is of MID importance to the wiki.

See RationalWiki:Article rating for more information.

Cover Story
This article is, among others, randomly included on the Main Page.
Please keep this in mind and be sure that your edits are of the quality that this implies.
Its front-page abstract can be found here and its editnotice here.
This page is automatically archived by Archivist
Archives for this talk page:  

/r/ControlProblem (a subreddit about AI control) is really worried

https://www.reddit.com/r/ControlProblem/comments/5hh9pq/gods_anxiety/ FuzzyCatPotato of the Tawdry Tapiocas (talk/stalk) 04:55, 10 December 2016 (UTC)

ohoho holy shit, back down the rabbit hole. Also, CFAR has finally admitted it's literally all about "AI risk" in the MIRI sense. I guess the attempts to paint it as a skeptical organisation met with a bit much ... skepticism - David Gerard (talk) 13:44, 10 December 2016 (UTC)

Is this Reddit post anything to worry about? Stuff like this gets me very worried! 70.50.8.19 (talk) 15:52, 22 January 2017 (UTC)

"Ignore acausal blackmail"; Possible after having predictably acted for fear of Acausal Blackmail?[edit]

I know that the winning strategy for making acausal blackmail useless is to refuse any acausal deal involving negative incentives.

That being said, can you adopt this strategy AFTER already accepting an acausal deal involving negative incentives?

Say someone was scared after learning about the Basilisk, was told that they must inform others about it to avoid punishment, and then did so. They have already shown that they are susceptible to acausal blackmail. Can they convincingly adopt the strategy of refusing acausal blackmail AFTER THE FACT as a protective measure, given that their track record shows that they will accept it?

What if they had already known that the winning strategy was to ignore acausal blackmail, but were caught off guard and carelessly, accidentally acted in accordance with the notion of "do x (with regard to the Basilisk) or else", then realized their mistake and tried to rectify it? Does that count as rejecting and refusing to be influenced by acausal blackmail? January15 (talk) 16:38, 11 December 2016 (UTC)

At this point you should consider the entire rest of the suggested ways out of the Basilisk, including the stupendous unlikelihood of the entire concept - David Gerard (talk) 17:34, 11 December 2016 (UTC)
Do you think that at this point the hypothetical "insurance" of rejecting all attempts at acausal blackmail is no longer an option, or do you think that I am thinking about the issue too much? Setting that aside: although I can admit that the concept of the Basilisk is somewhat unlikely, I still feel that no outlet should be left untested and that absolute certainty is of the utmost importance, particularly an outlet as sure as rejecting "acausal blackmail" despite potentially already having succumbed to it. (I'm not trying to be pedantic; I just find the rejection of acausal blackmail the most convincingly effective, so I'm particularly obsessed with ensuring I can be compliant despite the POSSIBILITY of previous "transgressions"/"infractions") January15 (talk) 17:43, 11 December 2016 (UTC)

Coming soon to popular culture

Charlie Stross is using this stuff in a novel, ETA 2018 - David Gerard (talk) 13:13, 24 December 2016 (UTC)

I don't understand this at all

Basically, the probability of a future AI that has never known me (mainly because it could appear long after I have left this mortal coil) recreating my self in order to torture it is pretty much nil (and, especially, how could it know about the ideas I'm having right now?). Even without the "So you're worrying about the Basilisk" section, it's pretty much common sense that this does not work, or at least I see it that way. --Panzerfaust (talk) 22:17, 17 April 2017 (UTC)

Given that 99%+ of the world's human population (not being computer code developers and rearrangers), and almost 100% of other organic life (apart from occasional attacks by sharks etc.), neither contribute to nor subtract from the advancement of computer sentience, what is the point of the Basilisk?
See some of the comments in the talk page archives. (But - the captcha question is 'nut case' so something might be trying to communicate on the subject) 31.51.114.49 (talk) 09:05, 18 April 2017 (UTC)
Yeah, it doesn't make sense. I think a pre-requisite of fearing the Basilisk in the first place is faithfully believing something along the lines of "THE SINGULARITY IS NEAR!!1". Reverend Black Percy (talk) 10:48, 18 April 2017 (UTC)

What if

... the original posting #was# the basilisk projecting itself... so the more benign version is created? 82.44.143.26 (talk) 16:34, 18 April 2017 (UTC)