Talk:LessWrong/Archive6


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

The problem with "rationality"

For quite a long time I have disliked the word "rationality" (and its variants such as rationalism and so on). The word does have specific meanings in specific contexts, such as economics and philosophy, but in the vast majority of cases it is a rhetorical club (really, read it). LessWrong has its own definition of the word (though I don't see why "epistemic rationality" is not a subset of "instrumental rationality", as truth-seeking is merely another goal). How is it any more valid than Ayn Rand's definition of it?

Having some knowledge of the history of philosophy, I would much prefer to call myself an empiricist over a rationalist, but I guess knowledge of history isn't a virtue at LW. Mind you, they use the word "rational" (and variants) at every opportunity, even for toothpaste!--Baloney Detection (talk) 19:40, 6 July 2012 (UTC)

In all fairness, IIRC Eliezer himself complained about this being an issue. You've got to give him some credit when it's due. Dmytry (talk) 20:33, 6 July 2012 (UTC)
You mean when he talks about "the way"? It sounds like Buddhism to me. I still find it funny how he talks about "communities of rationalists", yet dismisses everyone who is not into his home-brew (like scientists and skeptics) as "traditional rationalists". The "communities of rationalists" they refer to are those seeded by Yudkowskian memes, and would not even exist if Yudkowsky had never been born. Hence my critique of basing all their stuff on the ideas of one guy. By comparison, science would still exist even if Dawkins, Sagan, Feynman, or even Einstein or Newton, had never existed.--Baloney Detection (talk) 12:30, 8 July 2012 (UTC)

Comparing Yudkowskian memes to actual science

A new organization based on Yudkowskian memes has recently been seeded; a quick glance shows heavy referencing of Yudkowsky's writings.

What I wonder is whether it's really true that cognitive scientists talk about epistemic rationality and instrumental rationality (or anything equivalent), or whether these are just Yudkowsky's own phrases. Heck, is there even a clear definition of what rationality means? It's rather deceptive when the linked-to organization writes as if their ideas are based on science, and refers to Yudkowsky, a man who explicitly rejects science and spouts a certain pseudoscience.

Similarly, does mainstream science-based AI research ever refer to "friendly AI", or any other Yudkowskian meme? I know very little about AI, so information from people with more insight into the field would be appreciated.--Baloney Detection (talk) 18:09, 7 July 2012 (UTC)

Friendly AI merited a mention in the most recent edition of Russell and Norvig's AI: A Modern Approach, which is the definitive AI text. See this LW post summarizing it if you don't have access to the book directly. ApologistShill (talk) 22:53, 14 July 2012 (UTC)
Epistemic/instrumental rationality is a real distinction in philosophy. General "rationality" is not really a technical term -- see for yourself. "Friendly AI" is not an issue in mainstream AI, mostly because it's so insanely speculative at this point it would be like packing for your next trip to Mars. Nebuchadnezzar (talk) 18:44, 7 July 2012 (UTC)
The rational agent is usually the one that two-boxes on Newcomb's problem and defects in the Prisoner's Dilemma. I concur that friendly AI is not an issue. Relatedly, their idea of a 'utility function' is likewise ultra-speculative or outright confused: a mathematical function can't have the real world as its input domain. The 'utility function' in AI is a mathematical function that takes in a world model's output and spits out how desirable it is, so that you can use various algorithms to solve for the action input that maximizes the compound function utility(model(action)). For example, you can have a simulated wind tunnel and solve for the 'best' wing shape. It's not the same as making a wish to an overzealous genie; an AI converting the universe to airplane wings is not an issue (even if unlimited computational power or extremely better methods for finding maxima are speculated). Dmytry (talk) 03:32, 8 July 2012 (UTC)
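
(A minimal sketch of the utility(model(action)) arrangement described above, in Python; the one-parameter "wing" model, the drag formula and the grid search are all invented for illustration.)

# Toy illustration: an optimizer maximizes utility(model(action)).
# Everything here is hypothetical; the point is only that the utility
# scores the model's output, not the real world itself.

def drag_model(wing_angle):
    # Stand-in for a simulated wind tunnel: maps a design parameter
    # to a predicted drag figure (made-up formula).
    return (wing_angle - 7.0) ** 2 + 1.0

def utility(predicted_drag):
    # Scores the model's output: lower predicted drag is better.
    return -predicted_drag

def best_action(candidates):
    # Solve for the action input that maximizes utility(model(action)).
    return max(candidates, key=lambda a: utility(drag_model(a)))

angles = [i * 0.5 for i in range(41)]  # candidate wing angles, 0 to 20 degrees
print(best_action(angles))             # picks 7.0 under this toy model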
Hmm, so Newcomb is not just something invented by Yudkowsky? But I see your point: they are very speculative and pass off their ideas as valid concerns (and then slam mainstream science for not bothering very much with their concerns). Found another piece: I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace/Jaynes/Tversky/Kahneman) in the aftermath of... well, it's a long story. I'm vaguely familiar with Kahneman, not at all with the others. Is it really the case that the people he refers to have pitted science against Bayesianism the way Yudkowsky does, and rejected the scientific method? Or is he just talking out of his ass?--Baloney Detection (talk) 12:42, 8 July 2012 (UTC)
That's a pure gem as far as annoying me with stupidity goes. I wrote, a page back, about how you need very smart heuristics to do Bayesianism on an incomplete, possibly incorrect belief graph with loops and cycles (of which the LW crowd seems blissfully unaware); they should stop already. But seriously, read Pei Wang's article I linked. edit: actually, let's take it elsewhere, I feel we are writing too much about this stuff here. You can just ask at some mathematics or philosophy forum what the hell 'Bayescraft' is and why it is so awesome, linking this and not telling them you think it's bad or otherwise priming them. Dmytry (talk) 15:24, 8 July 2012 (UTC)

If Yudkowsky has demonstrated that the MWI of quantum mechanics is correct...

...then I think the Nobel Prize committee wants to hear from him.--Baloney Detection (talk) 13:37, 8 July 2012 (UTC)

http://www.poe-news.com/forums/sp.php?pi=1002430803 (and there were a couple of other links to other people noting the same error). He managed to get wrong everything for which 'wrong' is well defined (and which you can't reinterpret into making sense). The argument that 'Bayesianism' and 'Solomonoff induction' support MWI is also wrong (Solomonoff induction is only concerned with the codes that produce output beginning with the observed data, i.e. the code needs to pick one world out of many. The 'wavefunction collapse' is MWI plus such a code. That doesn't mean we shouldn't try to derive collapse from first principles anyway). Dmytry (talk) 11:53, 11 July 2012 (UTC)
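
(For reference, the standard Solomonoff prior makes the point above explicit: it only sums over programs p whose output on a universal prefix machine U begins with the observed data x,

$$ M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-|p|}, $$

so any code contributing to the sum has to single out the one sequence of observations actually seen, i.e. "pick one world out of many".)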
I got that meme from PoE actually :) I'm not a physicist at all, so I may be out on a limb here, but it strikes me as odd how a probability argument is supposed to settle which interpretation is correct, a la Yudkowsky.--Baloney Detection (talk) 22:21, 13 July 2012 (UTC)

Brilliant ripping apart

The trans-accountants comparison is quite good.--Baloney Detection (talk) 20:49, 8 July 2012 (UTC)

Luke's interview with Pei Wang is a similar level of reaction, though much much nicer about it. It comes across as "I'm trying to be polite here, but you people are crackpots". Dmytry's linked it a coupla times above - David Gerard (talk) 21:11, 8 July 2012 (UTC)
I linked something else the other time, actual published and cited paper by Pei Wang. Dmytry (talk) 21:29, 8 July 2012 (UTC)
That "dialogue" was very enlightening. Apparently mainstream scientific researchers don't give much for the ideas of LW/SIAI. And Muehlhauser's "Yudkowsky proclaimed", WTF? Meanwhile a LWer is trying to keep his fellow LWers in line, and not insult Wang.--Baloney Detection (talk) 21:40, 8 July 2012 (UTC)
I talked with a friend of mine who did some automated-solving-related work (not the sci-fi AI, but the kind of stuff that can actually find solutions). To my surprise he has even met EY in person through common friends while living in the Bay Area. He finds EY 'such an annoying guy' and compared him to a guy we both knew ages ago who would often BS about advanced physics, math, etc. without having the slightest clue, and he also complained about EY's "acolytes" and how ridiculous a thing they make out of it (rationality) in practice. LW would of course frame anything like that as a clash of tribal factions, and a clash it may be - the people who have never even seen a mammoth theorising about how to hunt it. Dmytry (talk) 20:06, 9 July 2012 (UTC)
That's an interesting story. I've heard at least one other account of Yudkowsky being an arrogant person who creates his own AI theory without feeling much need to compare it to ongoing research (I can give the exact quotation if wanted). LWers often talk about tribes and signalling, and sure, such things exist (I think to a certain extent they are unavoidable). What they seem to be completely blind to is how endemic it is at LW. What is the LW jargon if not a marker of tribal identity?--Baloney Detection (talk) 20:45, 13 July 2012 (UTC)

Yudkowsky seriously needs to re-examine his relationship with reality

Gem: I plan for the entire history of human civilization to be less than 1% of my lifespan.--Baloney Detection (talk) 12:50, 14 July 2012 (UTC)

Hmm. I wonder if he just applied the doomsday paradox to this, over his observer-moments, to obtain a 'rational prior' for AI risk. Dmytry (talk) 16:31, 14 July 2012 (UTC)
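
(The doomsday-style calculation presumably being alluded to, for reference: if one's rank n among N total observers or observer-moments is treated as a uniform random draw, then

$$ P\!\left(\frac{n}{N} \le f\right) = f, $$

so one should not expect to land within the first 1% of everything that will ever be observed; whether that kind of anthropic reasoning licenses any prior about AI or lifespans is exactly the speculation in question.)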