Talk:LessWrong/Archive11


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Like jargon for karma?

Look at this post, and this part:

And what's going on should be no mystery to anyone who has read through the excellent Less Wrong Sequence On Words. Words are hidden inferences, which form a leaky generalization over a set of cases that cluster along certain dimensions but may vary widely in their other characteristics. Because people feel like words are a single monolithic whole, arguments about the world tend to devolve into arguments about definitions of words (like "murder"), as if those definitions determined reality. To escape such arguments, the participants need to taboo that particular word and replace the symbol with the substance, which often means dissolving a term into its component inferences and reasoning about each one individually.

In the original link, Yvain has really squeezed in a lot of jargon and links to LW posts.--Baloney Detection (talk) 21:08, 30 August 2012 (UTC)

It's a thoroughly excellent post, and everyone here will benefit from reading it. The original was on his LiveJournal, the LW version was requested by EY and is slightly edited. RW readers will also enjoy this one - David Gerard (talk) 21:39, 30 August 2012 (UTC)
Yes, David, I concur. It's well worth reading. No kidding, and I will refrain from making sarcastic, ah, examples, relevant to common RationalWiki practice. But it's tempting. --Abd (talk) 02:36, 3 November 2012 (UTC)
Will it benefit anyone, or does it just look like it ought to benefit someone? Do you see this sort of talk in a research organization that is trying to optimize its efficiency, or among the woo peddlers who are probing for the best ways to rationalize? I know where it is heading. EY has no background in the field where he claims supreme expertise. He's, very simply put, an uneducated dilettante of the worst kind. Ditto for Muehlhauser. Precisely the same phenomenon as the one driving the anti-vaccination 'research'. Subsequently, the common argument is that he is an uneducated dilettante of the worst kind, which according to Yvain is the worst argument in the world, except it so happens to be the true cause of the entire nonsense. You can always post multiple platitudes as to how it is logically invalid to dismiss you, comparing yourself or your prophet of choice to Martin Luther King, which would serve as a counter-example to a strawman. Had Martin Luther King been a mafia boss, the argument that 'but he is a criminal' would have been utterly spot on (mafia bosses are not generally on your side). Dmytry (talk) 08:29, 31 August 2012 (UTC)
Your comment would only make sense if Yvain's primary purpose in posting to his personal blog was to propagandise SIAI - David Gerard (talk) 11:09, 31 August 2012 (UTC)
He was probably just sharing a rationalization he came up with, in a more generalized form. In any case, aside from strawman-ish examples, if I say that e.g. Ted Kaczynski is a murderer (just to pick an example of someone we'd usually call a criminal), this is a reference to a specific class of activities performed by Ted Kaczynski which we are both aware of; not really a call to lump Ted Kaczynski together with rapist murderers or murderers in general. As much as it is fun to argue that Ted Kaczynski's speculations must be engaged on their merits rather than on the basis of the source, the speculations usually have zero merits besides being the output of a sane and good human being, which is not the case for Ted Kaczynski, and subsequently his arguments do not get the merit that is assigned to an argument based on its source in a well-meaning, sane human being. Any argument you may face will be built out of words which will be ill-fitting to the concepts; the purpose of the words is to reference. E.g. when I say Ted Kaczynski is a murderer, or when I say Dean Corll is a murderer, I reference different sets of actions performed by each of those individuals. LW is full of people who have a rather poor understanding of how language is supposed to work. When someone says Martin Luther King is a criminal, well, you are usually unaware of exactly which laws he broke, and it's not so much an argument as a misleading statement in the absence of information (you usually reserve the word 'criminal' for people guilty of significant crime). Dmytry (talk) 15:23, 31 August 2012 (UTC)
Also, regarding 'abortion is murder'. The argument over whether abortion is murder is not really an argument about words, even though there's certainly a temptation to portray it as such. There is a group of actions we prohibit, we label it 'murder', it has to be fairly broad (e.g. it has to include being murdered in your sleep), and there has to be some distinction between those actions and abortion for abortion not to be murder, a distinction more general than 'but it is abortion!'. It is genuinely difficult to build a good murder/non-murder classifier where abortion and contraception are not murder; under the LW/SI utilitarianism, whereby they want to count as a loss the people who would never be born without friendly AI, even contraception is not any different from murdering someone in their sleep. Dmytry (talk) 16:05, 31 August 2012 (UTC)
It's possible that it's a good piece. I mainly noticed the absurd density of jargon-linking in a single text section.--Baloney Detection (talk) 17:42, 2 September 2012 (UTC)
Good is ill-defined. Good for what? Is there any scientific method behind linguistic and cognitive-science speculation regarding how people think? Does it reference any research? Nope. Pure speculation. It's not how things work, either. You'll claim "but Martin Luther King is a criminal", the opponent more or less tacitly points out that you are the sort of asshole who considers anyone standing up for their people a criminal, and that's the end of it. It's when the argument is actually valid that you'll go meta and claim it is The Worst Argument In The World. Rule of thumb: if it is coming from LW, it's great for rationalizations. If it is coming from, say, the IBM research division or Bell Labs or whatever, then it is good for actually thinking better. Sorry, but it just drives me nuts to see pre-scientific psycho-speculations described as 'good'. Dmytry (talk) 09:21, 3 September 2012 (UTC)

Of course, RationalWiki will directly lead to a negative singularity

"No-one talks about LW except RW, and they make fun of us. So the problem must be with them." - David Gerard (talk) 10:26, 27 October 2012 (UTC)

Heh. On the wider web, outside of transhumanists and similar people, LW is pretty much completely unheard of. As it happens, Yudkowsky and his ideas are also completely irrelevant to the scientific community and in academia in general (though he probably considers himself too smart for such venues anyway). When LW is stumbled upon by such people, it typically ends like this. I don't see what problem the guy you linked to has with LW being mentioned on RW. LW promotes certain pseudosciences and crankishnesses by their own admission and gets called out on it.--Baloney Detection (talk) 22:50, 27 October 2012 (UTC)
I saw that one. It was pretty much what I would expect from Dr. Dr. Dr. Pigliucci seeing an enthusiastic amateur come up with a conclusion like that and get people to believe it - David Gerard (talk) 23:18, 27 October 2012 (UTC)
It's irrelevant that Pigliucci is sometimes annoying. What's relevant is that Yudkowsky is wrong.--Baloney Detection (talk) 21:43, 28 October 2012 (UTC)
You appear to be reading things I didn't say. This is why you're a terrible writer who should keep away from the actual article - David Gerard (talk) 06:24, 29 October 2012 (UTC)
What are you talking about? Yudkowsky made a particular claim (scientific method vs Bayes) and a philosopher of science (Pigliucci) pointed out why Yudkowsky is wrong. I doubt Pigliucci is alone in this view. I don't think there is anyone outside of Yudkowsky's band of groupies who pits the scientific method and Bayes' theorem against each other. You are welcome to correct me.--Baloney Detection (talk) 21:14, 29 October 2012 (UTC)
As it happens, there is a skeptical blogger who writes on things such as the singularity from time to time. Well worth reading if you are interested in that kind of stuff. I once e-mailed him and asked if he had heard of LW, and if so, what he thought about it. Not at all fond of them.--Baloney Detection (talk) 22:54, 27 October 2012 (UTC)
Looking down the first few pages, that blog looks like self-important bloviating shite. Please link to the posts that caught your attention - David Gerard (talk) 23:23, 27 October 2012 (UTC)
"Self-important bloviating shite" pretty much describes the stuff Yudkowsky writes.--Baloney Detection (talk) 21:43, 28 October 2012 (UTC)
Ah, search on "singularity" and we see posts from someone frustrated at people with no track record telling people with a track record to take them seriously - David Gerard (talk) 11:03, 28 October 2012 (UTC)
I don't understand what you are trying to say here. Is the blogger I linked a guy with no track record trying to get people with track records to take him seriously on singularitarianism? Or is it the other way around?--Baloney Detection (talk) 21:43, 28 October 2012 (UTC)
The only time LW is ever referenced is when someone wants to add supporting sources for how people really are researching the singularity and smart people say it's gonna happen, honest; they have money! As an aside, I've shown RW to people before as a resource, but even if LW has more 'serious' content, it's often so irrelevant that I have no need to plug anything on it, to anyone, ever. This is probably why no one has ever talked about them: there's almost no reason to. KnightOfTL;DR going galt: the literal crazy train 01:33, 28 October 2012 (UTC)
I agree that LW is trying to be more "serious" than RW, but so what? Conservapedia is also trying to be more serious than RW, as are the websites of literally millions of religious proselytizers out there. I suppose you can say that The Skeptic's Dictionary, or really most science and skeptical sites out there, are the serious versions of RW.--Baloney Detection (talk) 21:47, 28 October 2012 (UTC)
Wait, you talk to people in real life about the internet? Why would you do that? Radioactive afikomen Please ignore all my awful pre-2014 comments. 02:20, 28 October 2012 (UTC)
"Hey, do you know any place I can read all about debunking X woo? Oh, thanks. What? Stay away from the community? OK, thanks for the warning. I'll look up RationalWiki later, then." ±Knightoftldrsig.pngKnightOfTL;DRcritical thinking is the key to success! 02:47, 28 October 2012 (UTC)
What does this mean? LW wants people to stay away from the RW community?--Baloney Detection (talk) 22:44, 28 October 2012 (UTC)
Ah. RationalWiki talk is quite separate from my real life. As far as my parents are concerned, I'm just doing "stuff on the internet." And the only opinions my brother gets out of me are on politics and whatever terrible show we're currently watching. Radioactive afikomen Please ignore all my awful pre-2014 comments. 03:29, 28 October 2012 (UTC)

RW does turn out to be where important announcements are made (verified by XiXiDu as being Dr Briggs). LW/SIAI probably need an actual PR strategy - David Gerard (talk) 10:29, 29 October 2012 (UTC)

I think LW/SIAI benefits from staying in the shadows. It just takes one credible scholar in some field ridiculing them and their credibility is irreversibly destroyed. As it is now, they are almost completely unheard of.--Baloney Detection (talk) 22:17, 29 October 2012 (UTC)
Credible scholars in the field have heard of them and seem to think they're bozos. Finding something suitably citable is another matter, short of someone prominent in AI publicly blowing up at them - David Gerard (talk) 22:55, 29 October 2012 (UTC)
I see. This whole rejection story is delicious. How have the LessWrongians reacted to her rejection? I tried to Google around on their page, but no result except that they were going to fund it (which as we now know won't happen). I'm somewhat surprised by Briggs' desire to completely erase any text that mentions her and LW together. Is LW really such a pariah among established academia? Their singularity summits at least have some credible folks.--Baloney Detection (talk) 18:49, 30 October 2012 (UTC)
"How have the LessWrongians reacted to her rejection?" What you'd expect? TDT is so awesome that Briggs's simply not smart enough. Check their thread on this. Dmytry (talk) 19:03, 31 October 2012 (UTC)
There appears to be no consequence on RW for misleading usage of sources. "Their thread on this" -- started as a comment by David Gerard -- doesn't show the "TDT is so awesome" / "Briggs isn't so smart" opinion. At least not yet!
I've been finding myself generally in agreement with David. Maybe I should up my dose. I even agreed with Ace McWicked's revert of Brxbrx's "comment" on this page. Should I call my shrink? --Abd (talk) 23:00, 31 October 2012 (UTC)
"So my best guess is the other mentioned possibility: she didn't think she could do anything worthwhile with TDT, which is interesting to me since I read a few of her papers and they were pretty good but other people who seem smarter than me and much better at decision theory think TDT is interesting and novel and a good starting point for more work!" (http://lesswrong.com/r/discussion/lw/f5b/the_problem_with_rational_wiki/7pku) is pretty close, I'm a bit disappointed rest of them didn't chime in on this with permutations of the general idea though (or maybe they did elsewhere). For your information: TDT is their rebranding of Hofstadter's super-rationality. Dmytry (talk) 00:21, 5 November 2012 (UTC)

"How to spot" section[edit]

The section "How to spot a LessWrongian in your sceptical discussion" mostly just repeats previously-covered criticisms, without being clever, funny, or interesting. I'd like to just delete the section. Objections?--ADtalkModerator 23:25, 29 October 2012 (UTC)

It's pretty old and the examples in it are pretty old. Can't think when I last saw one pop up in a skeptical discussion. Bayes is esoteric but will be the hot new thing in the skepticsphere. (Richard Carrier's explanation in Proving History is brilliant.) The rationalism-as-religion-substitute and Dunning-Krugering need to go somewhere - David Gerard (talk) 23:52, 29 October 2012 (UTC)
I don't think Bayes' theorem will become a religious substitute for skeptics, or that they will claim it replaces the scientific method like LessWrongians do. I haven't read "Proving History", but my very distant impression (it is possible that I'm completely wrong) is that Carrier is pissed at mainstream historians for considering Jesus a historical figure, and then goes on to "prove" that Jesus is not. As for me, I don't consider the question of whether Jesus was a historical person terribly important, but I'd think that he was. It is the simplest explanation, and there were many Messiah claimants in the area at the time.--Baloney Detection (talk) 18:33, 30 October 2012 (UTC)
I strongly recommend it. Carrier gets way too mathy for the humanities people, but he's fundamentally right, because Bayes is a theorem. However, "Bayesianism" (the obviously correct way to think) is a little different from "Yudkowskianism". Read Ch 3-4 of Carrier's book, it's fantastic. (Readily available pirated if you're poor right now.) - David Gerard (talk) 19:42, 30 October 2012 (UTC)
I'll try to get hold of it. It will make it onto the list of books I'll get to read eventually. As someone relatively fresh out of university and into real working life, I'm still trying to maximize the efficiency with which I use my spare time. Anyways, so Carrier's understanding of Bayes' theorem is different from the Yudkowskian version which replaces the scientific method? And why is Bayesianism "the obviously correct way to think"?--Baloney Detection (talk) 22:15, 30 October 2012 (UTC)
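(For reference, the theorem both sides keep invoking here is just standard conditional probability; the dispute above is about how far it stretches as an epistemology, not about the math. For a hypothesis H and evidence E: P(H|E) = P(E|H) × P(H) / P(E). A worked example with made-up numbers: if a condition has a 1% base rate, a test detects it 90% of the time, and it false-positives 5% of the time, then a positive result gives P(H|E) = (0.9 × 0.01) / (0.9 × 0.01 + 0.05 × 0.99) ≈ 0.15.)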
  • I'll assert for the moment that I judge it unlikely that Yudkowsky thinks that his thinking "replaces the scientific method." In some contexts, it would supplement it.
  • Also, a single-purpose account whose entire raison d'etre seems to be debunking "Yudkowskianism" is not terribly likely to be basing this on understanding it. (First edit.)
  • So far, to me -- I've just begun to read the sequences -- Yudkowsky's essays seem to be belaboring the obvious, with his claim to fame being that he does this exceedingly well.
  • "The correct way to think" doesn't sound like a Yudkowskianism, because it is so unspecified. "Correct" for what purpose? Of course, David wrote that, and it would be referring, properly, to some context, where Bayesian analysis is a crackerjack method of improving predictive reasoning.
Speaking as a long-term rationalist (hey, I'm using that word without getting it from Yudkowsky), "baloney detection," for a scientist, begins at home. --Abd (talk) 01:51, 31 October 2012 (UTC)
Yudkowsky has claimed that the scientific method and Bayes' theorem are in conflict with each other. This is a fact. Yudkowsky also peddles things like cryonics. Apparently his fans think he should get a pass when promoting crackpottery. And further, this account is not just to bash Yudkowsky and LW. I've done some edits at the atheism entry, and I did start the entries on The Economist, Internet Infidels and Sean Carroll.--Baloney Detection (talk) 21:25, 31 October 2012 (UTC)
  • SPA points out that he's made a few other edits. Sure. So? The obvious fact remains.
  • Meanwhile, the SPA claims as "fact" what is obviously his personal interpretation, a basic logical error. Yudkowsky has generally pointed out that the "scientific method" and "Bayes" are distinct approaches. With regard to the issue in question, the scientific method can't be applied; other approaches are needed for clarity, such as Occam's Razor. Are Occam's Razor and the scientific method in "conflict"?
  • No. If the scientific method can be used to distinguish between hypotheses, then a sane application of Bayesian analysis or Occam's Razor will follow this. Yudkowsky's study is far more sophisticated than Baloney Detection's crude make-wrong summary, and that's true for a lot of such in the article. --Abd (talk) 23:23, 31 October 2012 (UTC)
Occam's razor is a tool in the scientific toolkit. Which, from what I understand, Bayes' theorem also is. You should read Pigliucci's refutation of Yudkowsky's ridiculous claim. When a random Internet person makes a particular claim about science that is rejected by both scientists and philosophers of science, odds are pretty high that the random Internet dude is the one who got it wrong. Yudkowsky is a complete nobody who hasn't really studied anything, and is mostly after getting people to subsidize him so that he won't ever have to get a real job.--Baloney Detection (talk) 20:10, 1 November 2012 (UTC)
I should mention (again) that Yudkowsky does promote stuff like cryonics, which you won't find recommended by any science-based medical association or community, and which has been described as quackery. Again (for the I-don't-know-which time), just because someone says he/she is rational doesn't mean he/she is. Ayn Rand claimed to base her philosophy on reason and to be pro-science, yet Objectivists tend to deny AGW and reject (quantum) physics as "corrupt". I don't really see in which way Yudkowsky is so different from Rand, except that he focuses heavily on transhumanism and is apparently more successful at getting otherwise reasonable people to buy into his crap. Yudkowsky doesn't even pretend to be pro-science. He claims to be "rational", and fills it with whatever he likes.--Baloney Detection (talk) 22:27, 1 November 2012 (UTC)
Let's see. I should wait until some Official Medical Science Association recommends cryonics, which is known and openly exists as a highly speculative endeavor? It might be impossible to revive cryonically preserved humans or brains. Indeed, it is probably impossible. But, remember, these are rationalists. What are the odds? What are the payoffs and risks? If one has a very high value for identity survival, even a quite risky procedure might still be rational. What would be important to me, making a decision to buy a cryonic insurance policy, say, would be the financial responsibility of the insurance company and any involved cryonic company. There could be quacks and frauds, taking money for nothing, and there could be relatively "legitimate" companies. What's the "quackery" here, and why should I trust Quackwatch.com as to a general judgment of an entire field? Do they have expertise in cryonics? I checked Quackwatch on this, and they misrepresent what is claimed about cryonics by advocates, at least what is claimed on LW. BD is simply throwing around labels and appeals to authority that he thinks will be popular here. Here is an argument from Yudkowsky that is clearly advocating cryonic preservation. It seems quite rational to me! If I were young, I might sign up. I'm not young. The page is obviously not a complete consideration, but it's a start, and light years ahead of Quackwatch.
The worst that this would be is a community meme, and every community has them. However, it turns out this one is easy. The LW wiki page on Cryonics. The Wikipedia article is linked from there.
Yudkowsky is an excellent writer, and he demonstrates, in whatever I've seen that was within my pay grade, very careful attention to rational process. "Yudkowsky doesn't even pretend to be pro-science" is an effort to recruit critics here, because we are supposedly pro-science here. Yudkowsky includes the scientific method as essential to the rationalist's tool kit, but he doesn't stop there, that's all. Baloney Detection is full of Baloney, he makes claims not supported by sources, he's dishonest. Yudkowsky might be wrong about this or that, so what? --Abd (talk) 02:07, 3 November 2012 (UTC)
The very fact that cryonics is known yet rejected by science-based medicine should make you even more suspicious of it (it's not like no one has ever heard of it). And your defense of it sounds like Pascal's wager. Why not profess to believe in Jesus? It could be true after all! You might use "They are rationalists!" as a mantra however much you want, it doesn't change the fact that Yudkowsky is a loud promoter of pseudo- and fringe sciences. You are welcome to point out where I am dishonest. I tend to give references for my edits. You vastly overestimate my powers, I didn't create this entry, nor did I decide its layout. Many edits on this entry I think were made by David Gerard, and if you think he is hostile to LW, you need to check that again. Yes, we are supposed to be pro-science here. I think Yudkowsky (like other cranks) gets upset with the scientific method when it doesn't say quite what he wants it to say.--Baloney Detection (talk) 10:08, 3 November 2012 (UTC)
How in the world could "science-based medicine" even consider cryonics? It's obvious that we are nowhere near ready to revive humans. There is a little animal experimentation, but the tools necessary for recovery of memory -- which is the core issue that LWians would be concerned with -- don't exist. This is pseudoskepticism: an actual situation of no consideration is summarized as "rejection." The rejection BD cited is pure baloney, that Quackwatch page. Not even wrong.
Cryonics is based on a speculation. It is not "unscientific" to speculate; it's part of the process of real science. Speculation leads to testing, but in this case, we aren't prepared to test. It would be pseudoscience if testable claims were being made, if "truth" were being asserted without evidence, rather than merely reasonable arguments. Now, there *are* some testable claims. It's a reasonable challenge: are synapses preserved? Could some neurons linked in a testable way be frozen and revived? Anyone know if that research has been done? But I haven't noticed any claim that this was already possible. So it's not in the territory of science yet, it's speculation that is science-based, i.e., with knowledge of science. It isn't even fringe, it's simply not in the realm of known science yet, except for the few experimental results that exist. BD has no clue about the scientific method, he's confusing it with epistemology and decision-making process, which is a broader field. AI needs to explore that, this is the connection that leads Yudkowsky to look at all this stuff.
To say it more succinctly, BD claims "rejected by science-based medicine." Okay, source? That is a science-based medical source? Quackwatch is definitely not such a source, it's a polemical site that attacks whatever the owners don't like. Some of it might be useful, some not, but the page on cryonics is crap. Out of date. Ridiculous, straw man arguments. --Abd (talk) 20:08, 3 November 2012 (UTC)
Quackwatch doesn't have an anti-LW agenda or anything like that (I'd be surprised if the creator even knows of LW). It's simply a site to inform consumers about various medical treatments and how scientifically grounded they are. You don't like Quackwatch? Well, Michael Shermer has also written about cryonics, as has The Skeptic's Dictionary, here and here. Cryonicists make claims that science can test. I think what you need to understand is that LW is not a "rationalist" site, it is a transhumanist site in rationalist garb. In a media article (betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all) it was explicitly claimed that they use their "rationalist" stuff to lure people who otherwise wouldn't buy into their stuff. Seems to work on you. You might also be interested in looking up what rationalism really is. Then perhaps you'll see why empiricism is better :P
"Yudkowsky includes the scientific method as essential to the rationalist's tool kit, but he doesn't stop there, that's all." Other ways of knowing? Seriously?--Baloney Detection (talk) 11:20, 4 November 2012 (UTC)
I think it is pretty funny, but maybe the section should be redone to something like "How to recognize a LessWrongian" and list common characteristics. In my experience, if a person uses "rationality" (or any variant, like rationalist) in every third sentence or so, it is possibly a LessWrongian. If they then casually throw around "Bayes", "AI" and/or "singularity", it is almost 100% a LessWrongian.--Baloney Detection (talk) 18:33, 30 October 2012 (UTC)
This is about right. I'm not sure it's a great section in late 2012, though. Understated facts beat everything else - David Gerard (talk) 19:42, 30 October 2012 (UTC)
I'm not sure, quite, what a LessWrongian is, but David has long experience. People who run in certain circles pick up the jargon of the circles, and use it. Reading Yudkowsky, I can recognize certain clear -- and so far unacknowledged, AFAIK -- linguistic influences or cross-connections. It's unmistakeable, and precisely used. --Abd (talk) 01:56, 31 October 2012 (UTC)

Every time I've read this page, I've thought to myself "You know what this talk page needs? For motherfucking Abd to show up. Huge walls of text make every conversation better." And, lo, my prayers were answered. Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:53, 31 October 2012 (UTC)

Glad I didn't disappoint you. However, I am a little puzzled. A collection of small comments can be a wall of text, i.e., forbidding to the reader, just as much as a single bit of text. I don't think I delivered my patented Wall-o-Text(TM) here. What is above was hardly huge. Are you guys getting soft, pushovers? --Abd (talk) 06:09, 31 October 2012 (UTC)

A scientist weighs in on Yudkowsky's Bayes vs science nonsense

A while ago I e-mailed physicist Sean Carroll about Yudkowsky's science vs Bayes claim. Below is the conversation, without the links in the original e-mail:

Dear professor Carroll,

Here, an Internet blogger (with a huge following, for what that's worth) who also runs the Singularity Institute claims to have shown that the many-worlds-interpretation of quantum mechanics is correct. Perhaps more controversially, he claims to have done so using Bayes' theorem. His view is that science rejects the MWI, and Bayes' theorem affirms it, and therefore Bayes' theorem is superior to the scientific method.

Is there anything at all to this, or is it plain crackpottery?

Cheers, [Name]

Here is his reply:

I personally am in favor of MWI. But I haven't gone through Yudkowsky's "proof," so I can't really evaluate it. But I don't think that Bayes's theorem is in any way incompatible with the scientific method.

Sean

The LWians should explain why Yudkowsky's claim of science and Bayes being in conflict is correct.--Baloney Detection (talk) 19:52, 26 October 2012 (UTC)

Cool story, bro. Scarlet Ad hominem (Moderator) 20:00, 26 October 2012 (UTC)
Yudkowsky is wrong.--Baloney Detection (talk) 22:41, 27 October 2012 (UTC)
So ... a Real Scientist™ who hasn't read the article disagrees with it. Seems legit. --89.101.183.99 (talk) 12:36, 5 November 2012 (UTC)
Baloney, that does not follow from the posted exchange.--AD (talk) Moderator 12:58, 5 November 2012 (UTC)

Straw man criticism

Classic. Sounds crazy, eh?

Yudkowsky is interested in what could be described as causality that goes backwards in time: future events "causing" past events[42]

The link is to a page which is, I'll claim, difficult to follow and understand without knowing a lot of prior theory. It "could be described" the way the writer claimed, but only because you can say anything you like; that description is far from obvious from the page. As I understand the kind of "causality" being explained, this is decision theory. My guess about what Yudkowsky is saying, at this time: If an artificial intelligence (AI) can accurately predict the responses of actors to a change now, the AI can posit a future, and then create that future by how it acts now. This is "the future creating the present." It's actually normal for humans. It's a Landmark trope, for example. ("What is causing the present is not the past, but the future that we are living into.") But what is "causing" the AI's actions is not the future, directly, but the possibility of that future, which exists in the present. The projected possible future then can be seen as causing itself, through the AI, to become a real future.

Through an extraordinarily silly chain of "reasoning," this turns into the "Basilisk" idea, about which our LW critic makes so much, that the AI would punish people now based on their being insufficiently active to prevent "Friendly AI" from arising. Or whatever. Ask me in a year. I'm not going there, so convoluted is that. LW is not a web site to be grasped in a few days, unless one is simply ready to assume that what they "know" is all true. In which case, because Yudkowsky is challenging common modes of thinking, and carefully and thoroughly, one is simply not likely to get it. Especially the tl;dr crowd won't get it. Way too many words. Where are the Cliff Notes? --Abd (talk) 01:28, 3 November 2012 (UTC)
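As a purely illustrative aside on the "future creating the present" mechanism described two paragraphs up: the point is just that a predictor chooses its present action according to which predicted future scores best, so the imagined future drives the present choice. Below is a minimal toy sketch of that idea in Python, not Yudkowsky's actual decision theory; all names and numbers are invented for the example.

# Toy sketch: an agent picks its action *now* by simulating the future each
# candidate action would produce. The future only acts on the present through
# the agent's present prediction of it.

def predict_future(world, action):
    # Hypothetical stand-in for a real world model / simulation.
    return world + action

def utility(future_state):
    # Hypothetical preferences: this agent likes futures near a target of 10.
    return -abs(future_state - 10)

def choose_action(world, candidate_actions):
    # The action chosen in the present is the one whose *predicted* future is best.
    return max(candidate_actions, key=lambda a: utility(predict_future(world, a)))

if __name__ == "__main__":
    # With world = 3, action 7 is picked because its predicted future (10) scores highest.
    print(choose_action(world=3, candidate_actions=[-2, 0, 5, 7, 12]))

Nothing here runs backwards in time; the "future" appears only as the output of predict_future, computed in the present, which is the sense in which the reading above seems reasonable.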

What are you saying? You seem to have just described why that sentence is true... do you want to edit it out?--AD (talk) Moderator 04:38, 3 November 2012 (UTC)
It's an example of misleading with truth. "Causality that goes backwards in time" sounds nutty as hell. But it's not nutty in context, at least as I've understood it so far. Yudkowsky is exploring how very strong AI would look. (It requires an ability to anticipate consequences with precision for what he's talking about to be reasonable.) However, my point is about what BD has done: scour LW for stuff that might look bad. Maybe that's okay, this is RatWiki, where a sober, neutral approach isn't necessarily the site ethos, eh? Anyway, AD, it's a question of context. I'm not encouraged to waste my time improving RatWiki articles; it tends to get reverted because of who I am, rather than because content is better this way or that. As suggested before -- by you, actually -- I suggest changes on Talk. Once in a while. The change here would be to either take it out, or contextualize it so that it's not merely an attack. If I understand him correctly, Yudkowsky's proposal isn't all that outrageous, and it could be used to describe human intelligence as well. But maybe I'm wrong. --Abd (talk) 19:55, 3 November 2012 (UTC)
I am puzzled with your reading of this sentence, to be honest. Taken out of context, it does appear to be trying to make Yud look a bit nutty - but you also admit that in context, it doesn't. Nor does it make him look bad, but rather is necessary to even begin to understand the Roko incident of which it is a part. I don't see how it can be contextualized further, since it's a sensible, unexaggerated, and entirely pertinent way to explain how the situation arose. The only questionable bit, the quotes around "causing" that might read like sarcasm, are instead necessary to properly qualify this form of causality, which operates backwards in the opposite manner to what we would usually intuit.
Thank you for discussing your proposed edit, but I disagree. Perhaps someone else will agree with you, though?--AD (talk) Moderator 20:11, 3 November 2012 (UTC)
Thanks for your response, AD. You are perhaps rewriting the article? I do think that the whole basilisk thing is worthy of attention. I'd like to know more about it, myself. I'm not at all sure, though, that Timeless Decision Theory is really relevant. We'll see. --Abd (talk) 20:22, 3 November 2012 (UTC)
Meanwhile, the "Basilisk" concept as described in the article makes practically no sense. "Punish"? Why? AI will punish, when? Punishment is not rational unless it prevents something. What would be prevented? I find it hard to imagine that the Basilisk idea was so stupid, as it seems here. As stated, it is far, far from rational. Did the community, itself, respond irrationally? We simply don't have enough information to judge, not in the article, at least. It sounds like HCM, all right, but without any explanation of what actually motivated people to lose their heads. Maybe I'll ask on LW. Heh. I'm already at minus 36 points, last I looked. Does my internet connection disintegrate or something if I reach some negative level? People are asking me questions (good ones!) and I can't respond directly because I'd need five positive karma points. Which I had until yesterday. So I'm responding by messaging. I don't mind. I think I'll make some effort to meet Yudkowsky, I'll be out there this month. --Abd (talk) 20:31, 3 November 2012 (UTC)
You can follow the links and read, if you want, but I already have. The wording here is not ideal, but it really does communicate what happened. It actually makes sense, because it involves many of Yud's favorite ideas: the idea that some thoughts might be evil and have real consequences; reverse "causality" (which is actually just an analogy for causality); the future ascension of AI into quasi-godhood; and his obvious reverence for ethics and human life. Thus we get the danger of spreading an idea (the basilisk: a future god-AI's potential judgment requires present action).
It's a very Judeo-Christian idea, akin to the admonition that it is better to tie a millstone about your neck and cast yourself into the water than to lead one child astray from the pursuit of God's benevolence. The general LW reaction is also unfortunate, and is probably best summed up by Yud's "Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive toACTUALLY BLACKMAIL YOU." (sic)--AD (talk) Moderator 21:05, 3 November 2012 (UTC)
What you just wrote here is far more interesting than what is in the article! That is, I can follow it, it makes sense, and it ties in with what may be site memes. I still don't know if it's fair, but at least it's a cogent story. --Abd (talk) 00:25, 4 November 2012 (UTC)
Ew. I just read the basilisk post at [1], which includes Yudkowsky's immediate response. Ah well, assuming this isn't fake, it's a clear demonstration that Yudkowsky is capable of an insane response to an insane post.
I can easily see why they would want to cover it up, it's embarrassing, but ... the problem is that, once we know about the basilisk, we can see a cover-up as being a continuation of the *necessity* of deleting it because of the Basilisk fear, maintaining an appearance of continued insanity. Cover-ups *normally* do more harm than good, even if there are seemingly quite good reasons for them. It's their web site and their community, but ... not good. Very unattractive. What else is coming down the pike? --Abd (talk) 01:21, 4 November 2012 (UTC)
They cover up the basilisk because it's supposedly harmful to know about it. I'd be surprised if anyone here takes the threat seriously. It's here as a demonstration that these guys are positively insane and as laughing fodder.--Baloney Detection (talk) 11:28, 4 November 2012 (UTC)
Here's my version, Abd. Roko's thought experiment demonstrated that EY's ideas of friendliness suck. Rage ensued. Speaking of which, the backgrounder: EY used to work simply on AI, not 'friendly AI'. Nothing seems to have come out of this work - no new methods, algorithms, anything (making an AI is an awful choice for someone's first programming project, anyway). You see how this collision between ego and reality was resolved. BTW, he also raged at Ben Goertzel back then, in the form of arguing that Ben's AI would kill everyone. This is what happens if you believe yourself to be really smart and you believe that AI is the most important thing that can be done. Dmytry (talk) 16:19, 4 November 2012 (UTC)
"Nothing seem to have come out of this work" Spot on! Yudkowsky and Muehlhauser talk a lot about how they are solving humanity's most important problem and how important it is to donate to the SIAI so they can continue their work. But they don't actually do anything. I think Peter Thiel should ask for a refund.--Baloney Detection (talk) 19:36, 4 November 2012 (UTC)
Well currently the idea is that the work on AI would have been too dangerous to release. Dmytry (talk) 21:43, 4 November 2012 (UTC)
That's odd behavior for a non-profit. Other non-profits tend to be quite keen to show their donors what their donations last year (or so) enabled them to do.--Baloney Detection (talk) 17:41, 5 November 2012 (UTC)