User talk:Baloney Detection
PsyGremlin (Speak!) 08:43, 16 May 2012 (UTC)
Sysop
You are now sysop. Have fun. RandonGeneration (talk) 22:06, 19 May 2012 (UTC)
- ...and read this. Peter with added ‼Science‼ 22:39, 19 May 2012 (UTC)
Let's discuss LW somewhere else or not discuss it because fuck it.
I feel we are eating up too much space here; it's unhealthy. It is an 'I am crazy or they are crazy' situation, and that's rather maddening and makes us look crazy too. You could ask at some mathematics forum about this "bayescraft" and ask why it would support the many-worlds interpretation, or ask whether that guy is an awesome statistics expert or just talking technobabble to justify weird beliefs like cryonics. Dmytry (talk) 15:33, 8 July 2012 (UTC)
- I see your point. It's just a bit upsetting how they get a pass (sort of) for calling themselves "rational". As for a math forum, it would be rather inappropriate for me because I'm pretty bad at math generally and probably wouldn't understand much. That's why I ask. Unlike Yudkowsky, I don't proclaim myself to be an expert on subjects I'm not. I consider myself pretty knowledgeable about history, and gladly correct misunderstandings I come across. I think the role of Bayes' theorem is a subject for philosophy of science (which Yudkowsky doesn't know anything about).--Baloney Detection (talk) 20:31, 8 July 2012 (UTC)
- I lean pretty hard in the direction of Bayesianism, but the "reformulating science as Bayesianism" stuff is a case of way too much of a good thing. See here for objections. Nebuchadnezzar (talk) 20:42, 8 July 2012 (UTC)
- Do you think the scientific method and Bayes' theorem are in conflict?--Baloney Detection (talk) 20:49, 8 July 2012 (UTC)
- No, that's sheer nonsense. Nebuchadnezzar (talk) 21:55, 8 July 2012 (UTC)
- LW's Bayesianism is to Bayes as objectivism is to objects. I don't think they know how to handle cycles, or have even thought about that. Plus, science works with a strategy like "I am willing to discard a valid theory this many percent of the time" (and the cut-off percent can be set based on the expected utility loss in the case where the theory is true but rejected by accident, versus the cost of research). How probable a theory is given the data is not well defined without how probable the theory is without the data, and that is a pure matter of opinion (Solomonoff induction or not), beyond the usual 'a larger theory is less probable'. Dmytry (talk) 20:17, 9 July 2012 (UTC)
I'm going to try answering some of the questions you asked:
Bayes' rule finds how probable the theory is given the data, using prior knowledge of how probable the theory is. It is also a proof that you can't know how probable the theory is given experimental data without having prior knowledge of how probable the theory is.
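For reference, this is the rule being described, in standard notation (H is the theory, D is the data; writing the denominator as a two-way split assumes just the two rival hypotheses H and not-H):

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \lnot H)\,P(\lnot H)}
```

The P(H) on the right is the prior; the posterior on the left cannot be computed without it, which is the point being made here.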
While Bayes' rule is incredibly useful (and I have used it multiple times in my work), it also has interesting bad properties: namely, if you are incredibly certain of a theory, then no amount of data may be able to lower your belief substantially. For example, if you start off with a sufficiently high probability of God, then no amount of evidence can convince you otherwise, as all evidence has a common failure mode (God is testing me), and so the evidence can never count too strongly against the theory (there is a minimum below which the probability of the evidence, given the theory, never falls).
People, fundamentally, act in precisely this Bayesian manner when being stubborn. There is no good practical solution to this problem.
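A minimal sketch of that stubbornness in code (a toy example with made-up numbers, not anything from the discussion itself): if the theory always "explains" the total evidence with at least some fixed probability, an extreme enough prior keeps the posterior high no matter how strongly the evidence favours the alternative.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after the evidence (Bayes' rule)."""
    num = p_evidence_given_h * prior
    return num / (num + p_evidence_given_not_h * (1.0 - prior))

prior = 1.0 - 1e-9  # near-total prior certainty in the theory
eps = 1e-6          # "God is testing me": the theory explains any evidence
                    # with probability at least eps

# Even evidence that is *certain* under the rival hypothesis barely moves us:
print(posterior(prior, eps, 1.0))  # ~0.999
```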
There is an alternative method: you can say, "Okay, we'll agree to set up an experiment so that there will be at most a 1-in-a-million chance that we will reject your theory even if it is true" (alternatively, we can agree to set up the experiment so that there will be at most a 1-in-a-million chance that you will accept a false theory). And with this we can settle our disputes correctly most of the time and arrive at agreement without having to use prior (without-the-evidence) probabilities for the hypotheses. This is the scientific method in a nutshell. There is a place for Bayes' rule in science too, and indeed Bayes' rule is heavily used in science when designing experiments and such; you just can't use Bayes' rule in circumstances where the prior probability is unknown. But the reason they go for Bayes' rule all the way is that they do not like this method of settling disputes, and they don't like it because it settles disputes against the woo they promote. Dmytry (talk) 09:31, 11 July 2012 (UTC)
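A sketch of that alternative in code (my own illustration, using the coin example that comes up below and the 1-in-a-million figure from above): fix the tolerated rate of rejecting a true theory in advance, and derive the decision rule from the hypothesis under test alone, with no prior anywhere.

```python
from math import comb

def tails_needed_to_reject(n_flips, alpha):
    """Smallest k such that P(at least k tails | fair coin) <= alpha.

    The threshold comes entirely from the hypothesis under test;
    no prior probability of the hypothesis is involved.
    """
    total = 2 ** n_flips
    for k in range(n_flips + 1):
        p_tail = sum(comb(n_flips, j) for j in range(k, n_flips + 1)) / total
        if p_tail <= alpha:
            return k
    return None  # not enough flips to reach this significance level

# How many tails out of 40 flips before we agree the coin is unfair, accepting
# at most a 1-in-a-million chance of rejecting a genuinely fair coin?
print(tails_needed_to_reject(40, 1e-6))  # 35
```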
- Thank you very much for that response. Math has always been a weakness for me. I wonder, though, how do you get probabilities/priors? Isn't that often very hard in real life? I recall reading a debate between a Muslim and an atheist, in which the Muslim casually threw around probabilities for how unlikely it would be for it to contain X if it was not of divine origin. Many wondered where he got those probabilities. They appeared to be just made up on the spot.--Baloney Detection (talk) 13:13, 14 July 2012 (UTC)
- Well, for example, if I am to use Bayes' rule in image classification on a dataset, I will get probabilities by sampling my dataset - what percentage of images are like what? Suppose I am to write an image filter to block 'nasty' images from children's eyes. I get how often this algorithm will face a nasty image - that is the prior - then apply the probabilities of particular features within nasty images (e.g. skin-coloured objects taking up a high percentage of the screen) to update the probability using Bayes' rule, pretty much as outlined in Eliezer's introduction to Bayes' theorem. Herein lies a huge potential for fucking up: two features may be correlated. Here's the other fuck-up: the existence of my filter will change what the pictures look like, and with them the prior probabilities. Making up prior probabilities from scratch, based on reason alone, fits well with the philosophy of 'rationalism' (the one that's opposed to empiricism), and it takes that view to believe this is the only way.

In the scientific method, the probability of the data given the theory can usually be inferred from the theory. For example: the theory is that the coin is unbiased; the data is a sequence of 10 tails. The probability of the data given the theory is 1/1024. The probability of the theory given the data, on the other hand, requires making up a prior probability p that the coin is unbiased, considering as the only alternative hypothesis that the coin always lands tails up, and then finding the posterior probability by such reasoning: a fraction p of trials had the unbiased coin, and of those, one in 1024 produced what we saw; the remaining fraction 1-p all produced what we saw. The fraction of trials consistent with the observation in which the coin is unbiased is then (p/1024)/(p/1024 + (1-p)). If you are very sure (99.9999%) that the coin can't always land tails up and must be unbiased, then (0.999999/1024)/(0.999999/1024 + 1-0.999999) = 0.998977, i.e. the probability that the coin is unbiased, after the evidence, is 0.998977. If you are assigning even odds that it is biased or unbiased, it is (0.5/1024)/(0.5/1024 + 0.5) = 0.00097561, or about one in a thousand. This has the interesting property that, in real-world situations, any such a-priori assumptions that you make up out of thin air can be gamed for fun and profit by some sort of scam scheme.

Note the alternative reasoning: if I am to decide after 10 tails that the coin can't be fair, then in the cases where the coin actually is fair, I will wrongly conclude it is unfair in one trial in 1024. I can then consider how often I do coin trials and how bad it would be for me to believe that the coin is not fair when it actually is, and decide after how many trials I am going to assume that the coin is not fair. The latter scheme does not require making up a sensible prior probability that the coin is not fair - having such a probability would improve the calculations, but even without it one can find the range of losses of the strategy corresponding to the range of prior probabilities, and worst-case losses can be assumed for a hostile environment (such as one with enough clever people who invariably believe their research is the most important thing in the world and want to trick you into paying them to do it). Dmytry (talk) 10:04, 26 July 2012 (UTC)
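The arithmetic in the comment above, reproduced as a check (same model: the only hypotheses are an unbiased coin and an always-tails coin):

```python
def p_unbiased_after_10_tails(prior_unbiased):
    """Posterior that the coin is unbiased after observing 10 tails in a row.

    P(10 tails | unbiased) = 1/1024; P(10 tails | always-tails) = 1.
    """
    p = prior_unbiased
    return (p / 1024) / (p / 1024 + (1 - p))

print(p_unbiased_after_10_tails(0.999999))  # ~0.998977
print(p_unbiased_after_10_tails(0.5))       # ~0.00097561, about one in a thousand
```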
Essayspace
WTF? Is the video supposed to be the content of the essay? If so, you don't understand what the Essay namespace is about. You can add the link to Sean Carroll's article and let someone delete the page.--ZooGuard (talk) 18:35, 16 July 2012 (UTC)
- Ok I added the link to the article, you can delete the page if you want to.--Baloney Detection (talk) 18:49, 16 July 2012 (UTC)
Relax, Max
Not everybody is as against LW as you are. That's okay. Personally, I agree. They're nothing but self-glorified science-fiction writers. But there's no need to get up in everyone's business about it. And please, try to keep it down to a minimum of pages.--User:Brxbrx/sig 21:00, 18 July 2012 (UTC)
- I agree that I might have developed an unhealthy obsession with LW as of late. But it really bugs me when certain people can't see the obvious crankery that is LW. I guess it is because LW is more subtle about it and you have to dig a little; they are not openly wooish like Chopra. If these people had been young in the 60s and 70s, they would likely have been Randroids. Rand held "reason" (her version of it, at least) in high regard and praised science, yet if you dig a little deeper, she was a complete crackpot.--Baloney Detection (talk) 21:59, 19 July 2012 (UTC)
This is just too funny.
http://lesswrong.com/lw/dqz/a_marriage_ceremony_for_aspiring_rationalists/ Dmytry (talk) 10:08, 26 July 2012 (UTC)
- They have pics of it here: http://patrissimo.livejournal.com/1491074.html --Baloney Detection (talk) 10:09, 28 July 2012 (UTC)
LW
Howdy! Let me preface by saying that I don't mean to be hostile here; I haven't seen anything to suggest that you are concerned with anything other than a fair depiction of LessWrong. So please don't take this as an attack - I just want to head off a perceived problem right now, is all.
After I rewrote the lede to the article, I noticed you very quickly added in an additional criticism to it. While not strictly wrong, it seemed misplaced, and so I moved it down a bit. But I was planning on reworking a lot of the article, and you seem to be one of the most frequent editors (along with Gerard, but I'm not worried about him). As I mentioned on the talk page, it looks to me like most of the article is filled with these little additions. But I see two problems with them:
- Their inclusion throughout the article makes it feel cluttered and difficult to read: clear commentary is hidden by numerous small details.
- The sheer profusion and triviality of many of the criticisms smack of an attempt to exhaustively detail every failing.
So I was wondering if maybe you could (in the future) work to integrate some of these criticisms into the article as a whole, rather than inserting them as addendums or interjections. Further, I thought it might be good for you to take a step back and pause before adding some things. I don't want to cite specific examples (since then we'd just start arguing over relative merit) but I do notice that you seem a very enthusiastic critic of LW. There's nothing wrong with that, but until we have a WIGOLW (not yet warranted, thankfully!) it might be best to dial it back a little bit. There's a lot of craziness that warrants comment, but not so exhaustively.
Please tell me if that sounds reasonable. Thank you for your time.--talk 08:03, 10 November 2012 (UTC)
- I didn't know you were in the middle of a longer-term revamp of the article. But sure, I'll refrain from adding new stuff for a while without talking it through on the talk page first, except possibly relevant links to the link list below if I happen to run into any (I think the link to Kruel is fitting, for example).--Baloney Detection (talk) 12:31, 10 November 2012 (UTC)
I've got you a present
It's past the point where your discussions on Talk:LessWrong and Talk:Eliezer Yudkowsky can be considered relevant to improving the articles. There's not much wrong with discussing a topic on the relevant article talk page, but you and your counterpart have been seriously crowding the space, far more than most non-content-related discussions normally do. You're filling up archive page after archive page, making it more difficult to sort through content discussions. You are also annoying other editors. Thus, I've gotten you a present:
I would make a new WIGO, but that would just piss off the crowd that hates WIGOs and wants them off the wiki. But at least you have a forum, a place where you can talk about Eliezer Yudkowsky and his fanboys to your heart's content, without wrinkling anything. Please try to focus new topics to the forum page I've magnanimously crafted for you.--"Shut up, Brx." 00:55, 2 January 2013 (UTC)
Yvain
Please describe on the talk page why you want to keep Yvain.--talk 21:30, 16 June 2013 (UTC)