User talk:ZackMartin/Zero and one as probabilities
Copied from Talk:Burden_of_proof#Uniform_prior
<nitpick>You can't have a prior of 0, since that would mean you could never update on evidence. (See here.) Also, the correct way to use probability assignments in Bayesian statistics is to treat the probability you assign to a statement as your level of belief in that statement, as Bob says.</nitpick> Tetronian you're clueless 15:26, 15 March 2011 (UTC)
- I can't see why one can't have priors of 0 or 1. There are some claims for which no possible evidence could cause you to update on them. E.g. "1+1=2": what prior probability do I assign to that? 1. "1+1=3": what prior probability do I assign to that? 0. OK, once "1+1=3" has a prior of 0, I can't update on any evidence to change my opinion of it - but that's OK, since I cannot possibly be wrong. (I don't find Yudkowsky's argument that there is possible evidence for 2+2=3 convincing.)
- Also, let's say the law of the excluded middle has a probability of 1: P(p OR not(p)) = 1. For mutually exclusive propositions, P(p OR q) = P(p) + P(q); since p and not(p) are mutually exclusive, P(p) + P(not(p)) = 1, and hence P(p) = 1 - P(not(p)). Thus, a belief of less than 0.5 in a proposition is a belief of more than 0.5 in its negation.
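- Spelled out as a derivation (this just restates the steps above in notation, nothing new):

```latex
\begin{align*}
P(p \lor \lnot p) &= 1 && \text{excluded middle, taken as certain} \\
P(p \lor \lnot p) &= P(p) + P(\lnot p) && \text{additivity: } p \text{ and } \lnot p \text{ are mutually exclusive} \\
\Rightarrow \quad P(p) &= 1 - P(\lnot p) && \text{so } P(p) < 0.5 \iff P(\lnot p) > 0.5
\end{align*}
```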
- I don't understand the issue Yudkowsky has with probabilities of 0 or 1. Yes, you can't convert them into odds using the real numbers - but you can if you use the affinely or projectively extended real numbers (which are just as respectable as the reals). So I really don't see the problem with probabilities of 0 or 1. Yes, it means you are at a fixed point where no further evidence can change your mind. And for P(Watermelon cures cancer)=1, that is not a good place to be. But for P(1+1=2)=1, that is an entirely reasonable place to be.
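- A minimal sketch of that odds conversion (the function names here are mine, purely for illustration; infinite floats stand in for the affinely extended reals):

```python
import math

def probability_to_odds(p: float) -> float:
    """Odds in favour, p / (1 - p). On the affinely extended reals,
    p = 1 maps to +infinity and p = 0 maps to 0, so the endpoints are
    perfectly well defined. (Python raises ZeroDivisionError on float
    division by zero, hence the explicit branch for p = 1.)"""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    if p == 1.0:
        return math.inf
    return p / (1.0 - p)

def odds_to_probability(odds: float) -> float:
    """Inverse map, odds / (1 + odds); infinite odds map back to 1."""
    if math.isinf(odds):
        return 1.0
    return odds / (1.0 + odds)

assert probability_to_odds(0.0) == 0.0
assert probability_to_odds(1.0) == math.inf
assert odds_to_probability(probability_to_odds(0.5)) == 0.5
```

--Maratrean (talk) 00:28, 16 March 2011 (UTC)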
- Simply put, human reasoning is fallible. This includes a priori reasoning, as this merely consists of observing one's own logical deductions and using them as evidence - and these deductions are often wrong. Thus, we can't justify being infinitely certain of anything. I think this anecdote illustrates the principle nicely. Note also that in Bayesianism probability represents your betting behavior; if you have a prior of 0, you would be willing to bet that the statement is wrong at infinity:1 odds, so if you were wrong, you would lose an infinite amount of money. And most importantly, is there really ever any reason to be in a situation where you can't update on evidence, even when it goes against your prior? Tetronian you're clueless 00:59, 16 March 2011 (UTC)
- Since you call yourself a 'subjective objective Bayesian', I think there are then two questions: (1) Is it possible for someone to have a belief with a probability of 0 or 1? (2) Is it rational for them to do so? The first question deals with the 'subjective' half, the second with the 'objective' half.
- As to the first question, there are certain beliefs about which very many people feel it is impossible that they are wrong. Of course, some people will think it is impossible that they are wrong about controversial claims (e.g. "Jesus rose from the dead"). The very fact that other people disagree with them is evidence that they are mistaken in believing it is impossible that they are wrong. But the fact that they are mistaken to believe with such confidence doesn't mean that they don't believe with such confidence. It is irrational to assign a prior of 0 or 1 to the statement "Jesus rose from the dead", but it's not impossible to do so. So this seems to answer the 'subjective' half of the question in the affirmative.
- A more favourable example is claims like '1+1=2', 'The universe exists', 'I exist', 'Space exists', 'Time exists', 'Minds exist', 'If A implies B, and A, then B'. Very few people doubt that these claims are true, and many people believe it is impossible that they are wrong about them. Are these assignments of probability 1 irrational? You can try to argue that they are, but it isn't obvious. Prima facie, it seems entirely reasonable to believe that it is impossible that one is wrong about these claims. So that seems to answer the 'objective' half of the question in the affirmative too.
- You say that to have a probability of 1, you'd have to be willing to bet an infinite amount on that proposition being true. But very many people, myself included, are willing to bet without limit on "1+1=2". I will bet all I own on its truth; I will borrow all I can borrow to bet on it; I am so certain it is true that I will bet anything and everything. Turning the question around: if you deny that P("1+1=2") = 1, there must be some limit to the amount of money you would be willing to bet on its truth. What is that limit? I don't think you can say what it is. And if you can't, that suggests you actually assign it a probability of 1.
- Not all arithmetic is as certain as "1+1=2". If I give you two six-digit numbers to add up, you won't feel certain about the result. But "1+1=2" - how can you actually doubt that? Just because you can doubt larger sums doesn't mean you can actually doubt "1+1=2".
- It will also vary somewhat from person to person. I have no doubt that "1+1=2", and I have no doubt that "3+7=10" either. But I am sure there was a point when, as a child, I was unsure about "3+7=10" yet quite certain about "1+1=2". And maybe, at some point I can't remember, I was even uncertain about "1+1=2" - although, if a child doesn't know that "1+1=2", is it that they don't know the truth of the proposition, or is it that they don't know the proposition itself, since they don't yet understand the symbols used to express it? If they don't even know the proposition, they can scarcely be said to have any degree of belief in it.
- If I assign "1+1=2" a prior of 1, I can't update it to anything else. That's fine with me -- I am 100% confident I'm not going to be updating my confidence in "1+1=2", not ever.
- If someone else assigns "Jesus rose from the dead" a prior of 1, they can't update it to anything else following Bayes rule. But they can update it by ignoring Bayes rule - when your priors themselves are irrational, violating Bayes rule may well be the most rational way to update yourself to a more rational situation.
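- To make the fixed-point behaviour concrete, here is a minimal sketch of Bayes rule at work (the likelihood numbers are made up purely for illustration):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) by Bayes' rule:

        P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]

    Priors of 0 and 1 are fixed points: no evidence moves them.
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    if denominator == 0.0:
        # The model says this evidence is impossible; leave the prior alone.
        return prior
    return numerator / denominator

# Evidence 99 times likelier if H is false than if H is true:
for prior in (0.0, 0.5, 0.99, 1.0):
    posterior = bayes_update(prior, p_e_given_h=0.01, p_e_given_not_h=0.99)
    print(f"prior={prior}: posterior={posterior:.4f}")
# prior=0.0 stays at 0 and prior=1.0 stays at 1; everything in between moves.
```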
- What is the probability of Bayes rule? If we are to follow Bayes rule in updating probabilities, how do we take into account P(Bayes rule) < 1? Don't we then need to reformulate Bayes rule to take into account the non-zero probability that it is false? But how could one possibly do so? And even if one could come up with a "meta-Bayes rule" that took into account that P(Bayes rule) < 1, surely if P(Bayes rule) < 1, then P(meta-Bayes rule) < 1 also? So then we'd need a meta-Bayes rule, a meta-meta-Bayes rule - an infinite regress results. The way out of this morass is to say P(Bayes rule) = 1. --(((Zack Martin)))™ 08:29, 17 March 2011 (UTC)