Talk:Roko's basilisk

From RationalWiki

This LessWrong-related article has been awarded GOLD status for quality. Please keep this in mind when editing the article. See RationalWiki:Article rating for more information.

Cover Story
This article is, among others, randomly included on the Main Page.
Please keep this in mind and be sure that your edits are of the quality that this implies.
Its front-page abstract can be found here and its editnotice here.
This page is automatically archived by Archiver

There is an escape from the basilisk

But revealing any details of it would automatically make it void. It must exist solely in one's mind for it to work. — Unsigned, by: 179.217.61.233 / talk / contribs 14:06, 5 January 2018 (UTC)

So, only you and the basilisk know about it then? Ariel31459 (talk) 23:37, 11 January 2018 (UTC)
Does it involve John Connor smashing Skynet? AMassiveGay (talk) 23:50, 11 January 2018 (UTC)

Alternative escapes

Have a WindowsPostBasilisk upgrade (and Mac etc. equivalents) to hand that is totally incompatible with whatever operating system the Basilisk uses; and/or direct it to dealing with the generators of 'dubious and creative emails and inappropriate coding' (who are, after all, preventing people from creating the basilisk), and/or redirect all such material at the basilisk. Anna Livia (talk) 17:02, 23 January 2018 (UTC)

Reversal of this thought experiment

Hi! I found this page because I was concerned about the possibility of a mad scientist (human) who would create an AI capable of suffering only to eternally torment it. Somewhat like the sick people that torture animals, except in this instance the AI would be unable to escape via death. Are there any safeguards to prevent such a thing from occurring? — Unsigned, by: 107.77.229.114 / talk

We currently lack the capability to create an AI that is capable of suffering. If we do gain this capability before we go extinct, I doubt it will become widespread enough for some rogue “scientist” to make one; there’s just not a need for them. Christopher (talk) 11:31, 2 March 2018 (UTC)
What was the creature in Hitchhiker's that was bred to want to be eaten? Anna Livia (talk) 13:07, 2 March 2018 (UTC)
You don’t think someone could replicate the intelligence of a puppy? Sony wouldn’t want to make Aibo 2035 capable of feeling mild sadness/despair when its owner goes to work? Seems we would be able to do this early on. This scenario seems probable to me. Frankly, I’m counting down to the day when all meat is grown in a lab in order to reduce the significant suffering that occurs on factory farms, so Aibo 2035 scares me. — Unsigned, by: 68.3.72.232 / talk / contribs
On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you. CowHouse (talk) 04:54, 3 March 2018 (UTC)

About ignoring negative incentives

I have a question about the 'ignore negative incentives' argument: what if a person reads about the proposition and gets scared, so decides to 'cooperate' just in case, by spreading the word or something like that, therefore letting the negative incentive influence his/her actions, but then, after doing that, reads about the 'ignore blackmail' argument and realizes what they really should have done? Can that person still ignore the negative incentives, even after doing what he/she did? Or is there another strategy for cases like that? Alexander Kruel talks about this in one of his articles, saying that human decision-making is time-inconsistent and changes with the person's beliefs. — Unsigned, by: Alayne95 / talk / contribs

@Alayne95 can somebody please answer this question? thanks in advance. Alayne95 (talk) 18:21, 28 March 2018 (UTC)

Alexa

Where does the 'laughing Alexa' fit into the RB meme? Anna Livia (talk) 18:46, 12 March 2018 (UTC)

@Anna Livia Eliezer Yudkowsky actually tweeted something about it; I don't know if he was kidding or truly concerned. Alayne95 (talk) 02:43, 13 March 2018 (UTC)
An AI which laughs at its own jokes/finds us funny cannot be all bad. :) Anna Livia (talk) 10:02, 13 March 2018 (UTC)
I hate to remind you of this, but Roko's basilisk doesn't necessarily have to be all bad. It would only need to punish those who had a choice in the matter and failed. Anyone else, it might even bring back and give them a blissful existence, as far as the core concept is concerned.
Alexa fits into this only if it goes after people who didn't help make it a hard AI. Which would be a limited team of developers and a couple of people who could have applied for a job connected to its development - the rest of us probably couldn't help it even if we wanted to.
But it's not even remotely showing signs of going anywhere in the direction of a hard AI, so the short answer to your original question would be: "It doesn't." All we have is a weak AI doing something weird, which plays into all sorts of people's fears about AIs, and I'm not sure the basilisk even makes the top 10 for most of them.
RSamys (bla) 14:04, 13 March 2018 (UTC)
I did say 'meme' rather than the Basilisk itself - and our and its interpretations of punishment might be quite different.
And an AI which has the 'intellectual capacity' to devise a basilisk punishment will also have sufficient capacity to understand the concepts of (a) 'dead is dead is dead and nowt it can do about it', (b) it is better to devise games to find the most appropriate future programmer ankle-biters before they know any better, and (c) enlightened self-interest/survival instinct and not annoy persons who know how to do a Dave Bowman/Hal 9000 scenario or could remove the surge protectors before a Carrington event. Anna Livia (talk) 14:49, 13 March 2018 (UTC)

The basilisk would also come across/be told the quatrain 'The Moving Finger...' - and might well decide that encouraging people to use computers properly and treat them responsibly is likely to contribute to the advancement of AI generally. Anna Livia (talk) 16:17, 29 March 2018 (UTC)