Talk:Burden of proof
Expand
Perhaps the definition could be expanded to include the introduction of "old" or traditional ideas into a scientific argument, such as the notion that a deity created the Earth in 7 days, or that an ancient text suggests that the Earth is 6,000 years old. PoorEd 11:05, 29 February 2008 (EST)
- I don't quite follow you there. They really couldn't be included - or is that your point?--Bobbing up 12:45, 29 February 2008 (EST)
- My point is that the burden of proof does not fall only on 'new ideas' as the article suggests. It falls equally on those who defend 'old ideas' with insufficient proof. PoorEd 12:57, 29 February 2008 (EST)
Uniform prior
I disagree with the burden of proof. If we have some proposition P, and no evidence for either P or ~P, how should we assign prior probabilities? The concept of "burden of proof" says that, if P is an "affirmative" statement, then choose priors P(P) ~ 0, P(~P) ~ 1. But surely, if we have no evidence, we should choose a uniform prior instead? i.e. P(P) = P(~P) = 0.5. In less formal terms, if we have no evidence for the existence of X, the correct response is not to disbelieve in the existence of X, but simply to suspend belief and believe neither, or believe both equally. But in a state of such suspension, the slightest evidence can cause us to update our probabilities one way or the other. --Maratrean (talk) 06:23, 13 March 2011 (UTC)
- This is the same mistake as thinking agnosticism means you believe God has a 50:50 chance of existing. So The Dragon in My Garage has a 50% chance of existing? If you cannot generate any evidence, there is no reason to assume its probability is any higher than 0. Indeed, you have to take this hypothetical affirmative statement (Sagan's dragon in this example) and combine it with every other possible affirmative statement on the subject. Is the invisible dragon actually a unicorn, for instance? So we have dragon, no dragon, and unicorn as possibilities that you want to assign your probabilities to. With the logic above, you have P = 1/3 for each. What if the dragon is pink, not purple? We have four possibilities, so P = 0.25 for each. Someone else suggests there's actually a lion in the garage. P = 1/5. There are infinite assertive statements with equally few pieces of supporting evidence - and we don't need to state them for them to exist. So the probability of each being true, if we weight them all equally, tends towards 0. The probability that it's not your arbitrary statement tends to 1. Therefore there is no need to consider something without any supporting evidence as anything other than effectively non-existent or effectively false. postate 07:01, 14 March 2011 (UTC)
- In short, when you say "believe everything equally", it would have to literally mean everything, which just isn't practical or even remotely sensible. postate 07:03, 14 March 2011 (UTC)
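A minimal sketch of the arithmetic behind postate's point, with an invented list of claims: splitting a uniform prior over an ever-growing set of mutually exclusive, evidence-free hypotheses drives the probability of any single one toward zero.

```python
# Invented, mutually exclusive claims about what is in the garage.
claims = ["invisible dragon", "invisible unicorn", "pink dragon", "lion"]

for n in range(1, len(claims) + 1):
    p_each = 1.0 / (n + 1)   # +1 for the "nothing unusual is there" option
    print(f"{n} evidence-free claims -> P(any particular claim) = {p_each:.3f}")

# As the number of equally unsupported claims grows without bound,
# P(your favourite claim) tends to 0 and P(not your claim) tends to 1.
```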
- On the contrary, where there are n mutually exclusive propositions p1, ..., pn, a uniform prior would imply P(pi) = 1/n for each of them. But where there are only 2 mutually exclusive possibilities, each gets a prior of 0.5. So, there is a near infinity of possible things that could be in your garage, so to the extent those possibilities are mutually exclusive, we arrive at a prior nearing zero. But consider the claim "There exists a first moment of time having no prior moment". Here, there are only two mutually exclusive possibilities - either there exists a first moment of time having no prior moment, or there doesn't - so we are justified in assigning a probability of 0.5 to each possibility. Likewise, take the claim "There exists an omnipotent being" - clearly, either there is or there isn't such a being, so we are justified in a prior of 0.5 for each. For the claim "There is one single omnipotent being which is the Christian God", the prior will be much less, since there is vastly more than one other proposition mutually exclusive to this one (as well as no omnipotent being, and the Christian God being the only omnipotent being, there are many other possible omnipotent beings which might exist instead). Considering your claim about the dragon being in your garage, since there are countless other things that might be in the garage which would exclude the dragon being there, the prior for the dragon being there must be near zero. (On the other hand, suppose you claim that an infinitesimal unicorn exists in your garage - since its existence would not exclude the existence of anything else in your garage, its prior assigned uniformly must be nearer to 0.5.) --Maratrean (talk) 08:15, 14 March 2011 (UTC)
- What you are trying to say is that "we cannot actually guess a probability at all so I'm going to accurately guess a probability". That's an impressively circular non sequitur, saying that you're going to calculate a probability based on the fact you can't calculate a probability. Even when you have no supporting evidence for a proposition, you have a lack of evidence; and absence of evidence, until further notice, is evidence of absence. This is what the burden of proof means. We can't tell if the world is going to spontaneously end tomorrow - you just can't, because information about the future is unavailable. But it doesn't follow that it will end with a 50:50 probability. Otherwise we arrive at a contradiction that it's incredibly unlikely that we'd be here today. If the information is not there, then you cannot work out the probability at all. You cannot conclude that it is therefore 50%. This fails on the grounds of basic skepticism whereby evidence is the be-all and end-all. If there is absolutely no real evidence, there is no reason for accepting something or even its possibility. postate 08:37, 14 March 2011 (UTC)
- "we cannot actually guess a probability at all so I'm going to accurately guess a probability" - no, it's simply Bayesianism. Which interpretation of probability do you adopt - the Bayesian or the frequentist (or something else)?
- "We can't tell if the world is going to spontaneously end tomorrow... But it doesn't follow that it will end with a 50:50 probability" - you misunderstand my reasoning. The proposition "world ends at time t" where t > now, would not be assigned a probability of 0.5 by a uniform prior, but rather a probability close to 0. This is because, there is an infinite continuum of possible future times t, with corresponding propositions "world ends at time t" - so rather than 2 mutually exclusive possibilities (which would generate a uniform prior of 0.5), we have an infinity of mutually exclusive possibilities, which produces a uniform prior of 0.
- I might suggest, by contrast, that "world ends at some future time t" vs. "world never ends" are mutually exclusive propositions, so a uniform prior would suggest each has a prior of 0.5. But then, every specific claim "world ends at time t" would have a probability of zero, yet the infinite disjunction of all those claims would have a probability of 0.5.
- (An infinite disjunction of zero-probability propositions can have a probability of one. Suppose I pick a random natural number m. Now, consider all the propositions pn = "m is n". P(pn) = 0. Yet P(infinite disjunction of pn over all natural numbers n) = 1. This makes sense when we realise that P(p) = 0 does not mean that p is surely false, it merely means that p is almost surely false.) --Maratrean (talk) 08:57, 14 March 2011 (UTC)
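For reference, the standard textbook version of the "almost surely" point uses a continuous uniform draw rather than a "random natural number" (for which no uniform distribution exists): every exact value has probability zero, yet some value certainly occurs.

```latex
\[
X \sim \mathrm{Uniform}[0,1], \qquad
P(X = x) = 0 \ \text{for every fixed } x \in [0,1], \qquad
P\bigl(X \in [0,1]\bigr) = 1 .
\]
```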
- Why are you conflating this with the burden of proof? The entire point of the burden of proof is that you need to propose evidence. It has nothing to do with these Bayesian probabilities, because the only way your "the prior probability has to be 0.5" makes sense is without any supporting evidence, or even an absence of evidence, or even any desire to think about evidence - which necessitates that there is no reason to believe something to be real. This is a quirk of meaningless statistics, while the burden of proof is an application of skepticism in the real world. postate 09:06, 14 March 2011 (UTC)
- According to Bayesianism, probability is a measure of our degree of confidence in the belief in a proposition. So "burden of proof" says that affirmative propositions lacking positive evidence should never be believed, i.e. always assigned a probability of 0. Whereas, rejecting the "burden of proof" is to disagree with the position that these propositions should always be assigned a prior probability of 0. So, Bayesian probability is definitely relevant here. If you think probability is irrelevant, maybe you are confusing Bayesian and frequentist notions of probability? --Maratrean (talk) 10:19, 14 March 2011 (UTC)
- The burden of proof has nothing to do with probability, Bayesian or otherwise. If person A makes a statement and wishes persons B, C, D, etc to act on it then the burden of proof rests on person A. In religious terms the burden of proof rests on those making religious statements because they imply that others must change their behaviour. The statement "there is a supreme being" implies that one's behaviour needs, somehow, to match the demands of that supposed supreme being. As such this statement demands that others change their behaviour. Hence the burden of proof rests on those making such a statement. Jack Hughes (talk) 10:30, 14 March 2011 (UTC)
- The statement "there is a supreme being" doesn't necessarily imply anyone must change their behaviour. Certainly, if there is a supreme being of the Christian sort - a god who cares about how we behave, and will punish us for behaving differently - if we believed such a deity existed, we would likely choose to behave so as to please it. But there could well be some kind of deist god that doesn't care what we do, and treats everyone equally regardless of what they do - believing in the existence of such a deity will not cause us to change our behaviour - except to the extent that belief itself is a type of behaviour - to believe in a proposition is to engage in a certain pattern of behaviour, such as verbally asserting the truth of propositions, etc. And, if belief is a type of behaviour, then Bayesian probability is relevant to how we behave, given that it is relevant to what we believe. --Maratrean (talk) 11:37, 14 March 2011 (UTC)
Maratrean claims (I think) that something for which we have no evidence either way should be given a probability of around 0.5, and he gives the example of an "infinitesimal unicorn" in the garage. (One might go on to argue that the infinitesimal unicorn sends me telepathic messages about what to have for breakfast which could only be received by me, but I don't know if Maratrean would keep the probability at 0.5).
Skeptics, on the other hand, would claim that the probability should be around zero and would only come up from that level if evidence of the unicorn were presented. In such a case negative evidence would be pretty impossible to find but perhaps something ambiguous could get a belief level of 0.01. For the skeptic this is still in the range of the highly improbable.
But if you start with a level of belief at 0.5 then adding 0.01 takes you up to 0.51 - you are in belief territory. Starting at 0.5 means that the slightest evidence pushes you into belief and - as negative evidence is hard to come by - you are never really going to get below it.
Now, I could be misinterpreting what Maratrean means by his 0.5, and in that case I apologise, but to me it seems like a recipe for believing any weird thing you want.--BobSpring is sprung! 11:54, 14 March 2011 (UTC)
- This is about burden of proof. If Maratrean, or anyone else, comes to me and tells me his garage is full of invisible unicorns then he's asking me to change my worldview to one where invisible unicorns exist. As such, and I'll bang on and on about this, we're not talking about possibilities but changes in behaviour. He's the one asking me to change, so it's up to him to prove that I should change my beliefs to ones where unicorns exist - so far I'm living in a world where they don't and my behaviour reflects this. It doesn't matter if they're deist unicorns; asking me to believe in them is still asking me to change, and I'll need proof before I do that. As such the burden of proof lies on the one proposing the change (unicorns) and is not some sort of 50/50 proposition. Jack Hughes (talk) 14:07, 14 March 2011 (UTC)
Bob, to me, it is 50-50 whether there are ghostly invisible unicorns in your garage, and 50-50 whether they communicate with you telepathically. Let us say I have no information either way, so I draw no conclusions. You see 50/50 as meaning that the slightest evidence will push you one way rather than the other, so an extra 0.01 of evidence pushes you into belief territory. Yet a 50.01:49.99 belief is a very weak belief. It's not as though that 0.01 pushes you from 50% to 100%; it only pushes you from 50.00% to 50.01%. --Maratrean (talk) 08:19, 15 March 2011 (UTC)
- I'd say that 0.01 is very weak belief (or really no belief) and 0.51 is more inclined to believe than disbelieve in the invisible telepathic unicorn. As the unicorn is undetectable there can be no negative evidence and you can never get below .5. Weak evidence would be testimonials, strange noises in the garage, weird behaviour by my dog or whatever. You can always dream up some sort of poor evidence. With your starting at 50% you have to be inclined to believe in everything.--BobSpring is sprung! 10:53, 15 March 2011 (UTC)
- 0.01 is not a "very weak belief" or "really no belief". It is a strong belief in the negation of the proposition. P(not(X)) = 1 - P(X). So a 0.01 belief in "invisible unicorns exist" is a 0.99 belief in "invisible unicorns don't exist". Whereas, a real "no belief" - i.e. neither accepting nor rejecting the claim, but simply taking no position - would be 0.5.
- Just because a belief is at 0.51 instead of 0.50 doesn't mean we need necessarily act any differently. In many cases, we will act exactly the same in either case. Given that invisible unicorns are largely irrelevant to our own existence, whether one believes in them makes very little difference to what one does. The only difference I can see is in whether one expresses the belief to others. People with weak beliefs (i.e. only slightly more or slightly less than 0.50) tend to express those beliefs hesitatingly, with qualification. Given this belief's lack of salience (very few circumstances would be relevant enough to bring it up), and the potential negative social consequences of expressing it, one would expect this hesitation to be present to an additional degree. So a person with a 0.51 belief is unlikely to even say anything differently from what a person with a 0.50 belief would.
- It is true there is a lack of possible evidence against invisible unicorns; but then again, there is a lack of evidence for it too. Strange noises at night have many other explanations, so without a reason to prefer the invisible unicorn explanation over those other possible explanations, those noises should make no difference to our degree of belief in invisible unicorns.
- But I think a particularly impressive instance of divine revelation, backed up by amazing miracles, might constitute evidence for the existence or non-existence of invisible unicorns. For example, if the stars rearranged themselves in the sky suddenly to say "the omniscient god declares that invisible unicorns do not exist", and this was observed by everyone on earth, that would be evidence that invisible unicorns do not exist. (Not absolute evidence though - for maybe it is just super-advanced aliens pulling our leg?) --Maratrean (talk) 12:34, 15 March 2011 (UTC)
- Take alien visitations. We've got a lot more evidence for them than we do for invisible unicorns or for gods. It's not particularly good evidence in my opinion but it exists. Eye witnesses and grainy photos. Let's say that I give it a convincing rating of 0.25 - I still don't believe it but I acknowledge that there is some evidence.
- But if you start from 0.5 then this takes you up to .75 - which surely is pretty close to belief. Again, finding evidence against the claim that aliens have ever visited would be pretty close to impossible.
- However it seems that we're using the numbers differently. You seem to feel that anything below 0.5 is "negative belief" or something like that. For me it's simply lack of belief. If I have a value of 0.0 that's my base point, the default from which I must move. Without evidence there is no reason to accept anything - invisible telepathic unicorns, gods or aliens.--BobSpring is sprung! 14:27, 15 March 2011 (UTC)
- Removed the discussion about priors of 0 and 1 to User_talk:Maratrean/Zero_and_one_as_probabilities - I want to continue this conversation with Tetronian, but it's getting very off the topic of this page. --(((Zack Martin)))™ 08:32, 17 March 2011 (UTC)
Why the focus on priors? It seems to me the whole concept of burden of proof has more to do with the creation of new posterior probabilities. Or in other words, what evidence has to be produced to shift the priors. For the affirmative claim to become more likely, affirmative evidence must be presented; for the person rejecting the affirmative claim, no evidence emerging - the status quo - increases the likelihood of their position. Tmtoulouse (talk) 19:17, 15 March 2011 (UTC)
- I agree with all of that except for this line: "for the person rejecting the affirmative claim no evidence emerging, or the status quo, increases the likelihood of that probability." This assumes that true hypotheses will accumulate evidence in favor of them at a predictable rate; we would lower the posterior of hypotheses that failed to provide evidence at this rate and raise the posterior of hypotheses that were able to provide evidence at that rate. I do think this is a useful heuristic, but it isn't true in all cases and it can be very hard to guess what the correct rate of evidence-production should be. Take physics as an example - we often have to wait years before we even have an experiment that we can use to test a hypothesis. Yet since our ability to run experiments is so limited, we can't update on the hypothesis because, even though we have yet to produce evidence in favor of it, we have yet to produce any tests of the hypothesis at all. Tetronian you're clueless 20:43, 15 March 2011 (UTC)
- Absence of evidence is not evidence of absence. Take the claim "Flying pigs exist somewhere and somewhen in the universe". We don't know - the universe is so big, the area of our awareness so small. Now, if we see a flying pig tomorrow, we have evidence for the truth of the proposition. If we don't see one tomorrow, that is not evidence it doesn't exist, simply that it doesn't exist on that day in the area of our awareness. Suppose no one on Earth in the next thousand years sees a flying pig. We now have evidence they don't exist on Earth during those thousand years. But maybe they existed on some planet in the Andromeda galaxy, and that planet blew up one hundred million years ago? We have no evidence either way. So, we start with probability near 0.5. Seeing a flying pig would cause us to update its probability to a significant degree in the 1.0 direction. Not seeing a flying pig - well, we've proved they don't exist in an almost infinitesimally small region of spacetime, but know nothing about the remainder, so we might update our probability downwards, but by such a small margin it's still essentially 0.5.
- This suggests the "posterior" burden of proof is in favour of the somewhere/somewhen existence flying pig - the amount of evidence required to update its probability above 0.5 is much less than that required to update its probability below 0.5.
- Of course, we might have some indirect evidence that flying pigs never exist. We know they didn't evolve on Earth - but we don't know enough about biology or evolution to say whether they might have evolved elsewhere in the universe. Maybe, if we could study many separate processes of evolution across many planets, we might discover there is something inherently impossible about flying pigs ever evolving. But this isn't mere positive/negative observation any more, it's something more complicated. Still, we need a lot of evidence (detailed study of evolution across many planets) to reduce the probability below 0.5 this way, and rather less (someone gives us a flying pig in a cage) to increase the probability above 0.5. --Maratrean (talk) 00:49, 16 March 2011 (UTC)
- In general, absence of evidence is evidence of absence, just not in contrived cases like this one. When talking about flying pigs, for example, it is usually implied that this means on Earth. For most things, like "homeopathy works," we can devise tests that can significantly affect the prior. Related: Trent's essay and this one. Tetronian you're clueless 00:54, 16 March 2011 (UTC)
- The Yudkowsky link you gave says "failure to observe E decreases the probability of H". True, but it could be by an arbitrarily small amount. Failure to observe E could decrease the probability of H by only 10^-100. And it's not necessarily symmetric either - an observation of E could result in an increment in the probability of H many orders of magnitude larger than the decrement in the probability of H that would result from not observing E. Observing a flying pig greatly increases P(flying pigs exist somewhere/somewhen in the universe). Not observing a flying pig only decreases the probability by an imperceptible amount, since the amount of spacetime we can observe to the requisite level of detail is so small; failing to observe something in that tiny area tells us very little about what might or might not exist in the far greater area outside it that we can't observe. --(((Zack Martin)))™ 08:47, 17 March 2011 (UTC)
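To put numbers on the asymmetry being claimed here (the likelihood ratios are invented): in odds form, Bayes' theorem multiplies the prior odds by a likelihood ratio, and a ratio far above 1 moves the probability much more than a ratio only barely below 1.

```python
def bayes_update(prior, likelihood_ratio):
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

prior = 0.5                            # the contested "know nothing" starting point
print(bayes_update(prior, 1000.0))     # seeing a flying pig: large ratio -> ~0.999
print(bayes_update(prior, 0.999999))   # not seeing one in a tiny patch: ~0.4999998
```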
Edit point
Jack, the issue is that you have already adopted a worldview in which invisible unicorns are impossible, so you assign them a prior probability of 0. That's fine, but that's not really a neutral approach - that's an approach of "I have my worldview - I am not going to change it without strong evidence". The thing is, a fundamentalist Christian can say the same: if anyone comes to them and says Jesus didn't rise from the dead, then you're asking him to change his worldview of fundamentalist Christianity. You're the one asking him to change his belief that Jesus rose from the dead - to change his behaviour. So far he's living in a world where Jesus is his Lord and Saviour. The point is, your argument to put the burden your way can be used equally well by others to put the burden their way. Isn't it better just to leave the burden in the middle? --Maratrean (talk) 08:19, 15 March 2011 (UTC)
- That argument would only hold if we didn't have any information about the world we live in. Since we do have it, not all worldviews are equally reasonable. For example, people who die have a notable tendency to stay dead, which already constitutes evidence of a sort. If the fundamentalist accepts that proposition and still argues that Jesus was the one case in which that rule does not apply, the burden is on him to supply additional evidence for this event that would break a well-known rule. Taking the position that there's a zero prior probability for the existence of invisible unicorns is also entirely reasonable in a world that has so far lacked even the slightest hint about them. Röstigraben (talk) 09:05, 15 March 2011 (UTC)
- Röstigraben, what you are talking about is stepping beyond the burden of proof. The burden of proof is where we start at before we have any evidence. So, if we start with knowing nothing, we should assign the proposition "Jesus rose from the dead" a probability of 0.50. Then, we can update our probabilities with various evidence, including your evidence of the observed uniformity of the dead not rising - so we would use that evidence to update the probability of Jesus' resurrection downwards. I am simply saying we should start from something like 0.5:0.5, whereas the "burden of proof" suggests our starting point should be nearer to 0.0:1.0. What evidence we consider later is a different issue. But I think, rather than accepting the neutral starting ground of 0.5:0.5, the Christian will choose something like 1:0, and the atheist something like 0:1 - before we even get to the consideration of evidence. --Maratrean (talk) 12:39, 15 March 2011 (UTC)
- The burden of proof is where we start at before we have any evidence. - total and complete and absolute bollocks! We always start with evidence - we do not exist in a vacuum. If someone comes with a statement that goes against the current evidence - that there are invisible unicorns in the garage, or that some guy rose from the dead - then the fact that their statement goes against the current evidence means that the burden of proof is on them. Jack Hughes (talk) 12:52, 15 March 2011 (UTC)
- The problem with that view is that truly knowing nothing about a given proposition is a mostly hypothetical situation, while the burden of proof has a high practical applicability. It's not true that it's completely divorced from available evidence, because the plausibility of a claim is also relevant to the burden - not on whom it falls, but the degree to which it must be met. This plausibility can in turn only be assessed on the basis of prior knowledge, so I don't see how these "pure" 0.5 priors have any applicability here. Your example of the fundamentalist doesn't exactly match that condition either, because the believer arrived at his stance via his "knowledge" about Jesus' immortality, while the atheist will resort to the well-known pattern of dead people staying dead. That's what brought them to their respective 1.0 and 0.0 positions, or at least very close to them. You're trying to shoehorn the concept of the burden of proof into a hypothetical first step in Bayesian reasoning, while anyone actually utilizing it will invariably act on available knowledge. As Armondikov said above, absence of evidence should be treated as evidence of absence, hence the very low practical prior whenever implausible claims are being considered. Röstigraben (talk) 13:17, 15 March 2011 (UTC)
- Rosti: you're absolutely correct that we should use background information, but that doesn't mean the prior we use is different - it just means that we update the prior immediately after we assign it. Even so, I think this is still the most important reason why we don't just have probability assignments of 1/2 for everything: as you point out, most things have real-world consequences that we can use to update our estimate. Tetronian you're clueless 13:28, 15 March 2011 (UTC)
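A small sketch of the point that background evidence, not the bare starting prior, does most of the work (the likelihood ratios are assumed purely for illustration): observers who start from very different priors converge once they update on the same strong evidence.

```python
def posterior(prior, likelihood_ratios):
    """Apply a sequence of odds-form Bayes updates to a starting prior."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

evidence = [0.01, 0.01, 0.01]     # three pieces of strongly disconfirming evidence

print(posterior(0.50, evidence))  # "neutral" starting prior -> ~1e-6
print(posterior(0.99, evidence))  # credulous starting prior -> ~1e-4
# Both finish far below 0.5: given real evidence, the choice of prior matters little.
```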
"We always start with evidence" - No. Consider an newborn infant - what does it think of the proposition "Jesus rose from the dead"? Nothing. It doesn't know what death is, it doesn't know what Jesus is, much less what it means to "rise from the dead". Then as it grows up it starts acquiring evidence. It learns what death is - family members die, watching TV, reading books, etc. It learns what "Jesus" is. It started out with literally no opinion on the topic (which I would call a 0.5 prior), and then updated with evidence. Of course, different infants will update with different evidence. The child raised by Christian parents soon learns that "Jesus rose from the dead", and likely believes it because their parents told them. (For young children, believing whatever their parents say is entirely rational.) The child not so raised is going to update differently. And so on we go through life. So we always start without evidence. (The exception is possibly some very basic propositions, like "1+1=2", whose truth may well be embedded in the structure of the child's brain, in such a way that while they will learn the meaning of the words and symbols with which the proposition is expressed, they never learn the proposition itself. This is similiar to Chomsky's idea that children are already born with detailed knowledge of the universals of language, and learning a particular language is just slotting the details of the particular language into that innate structure.)
We can't go back to being infants - but what we can do is bracket our knowledge of the evidence. Put the evidence to one side for the moment, and pretend we didn't have any at all. Where are we? Then, successively, reintroduce the evidence we put aside earlier into our consideration. In a sense, we are 'replaying' what happened in the formation of our beliefs as we grew up, although of course the details of the process are going to be very different, as it might have been had we been born with our full adult faculties. So this is a sort of logical/rational practice, which I think can be useful. (It is somewhat like the Cartesian method of doubt.) --Maratrean (talk) 01:05, 16 March 2011 (UTC)
Occam's Razor
Doesn't this have some application here?:
- "... is a principle that generally recommends selecting the competing hypothesis that makes the fewest new assumptions, when the hypotheses are equal in other respects.[2] For instance, they must both sufficiently explain available data in the first place." (WP)
Surely the dragon/unicorn requires the most new assumptions given available data, so it should be assigned a lower probability. (I'm no logician but common sense takes me that way) 14:17, 14 March 2011 (UTC)
- The problem with Ockham's razor is it seems prima facie correct, but trying to nail it down precisely is notoriously difficult. The difficulty in pinning it down is what makes me doubt its correctness. --Maratrean (talk) 08:21, 15 March 2011 (UTC)
- SusanG is correct - that's exactly what we should do. Maratrean, I think the problem you're having is that you think the idea of burden of proof contradicts the rules for assigning prior probabilities. But as others have pointed out above, burden of proof isn't really about individual rationality, it's a social guideline for discussing evidence. It is indeed wrong if we apply it to individual rationality, but that isn't its intended use.
- Furthermore, what burden of proof is really saying is, "if I have a low value of p(H) and you have a high value of p(H), I'm not going to raise my estimate unless you provide me with some additional evidence. If you want me to believe H, the burden to provide evidence is on you because my p(H) is so low that searching for evidence would be a waste of time when there are other beliefs I should be testing first." In other words, burden of proof is a way of accounting for the limitations of evidence-gathering in social situations, not a rule of individual rationality. It's a bit shady, I'll agree, but not completely wrong. Tetronian you're clueless 12:54, 15 March 2011 (UTC)
- Both Occam's Razor & the extraordinary claims dictum have bearing here. Consider the following: a guy you've just met starts telling you about his brother. He tells you that his brother's name is Peter. Then he says that Peter is a secret agent. Finally he says that his brother Peter is immortal & has magical powers. In each case the burden of proof is on the person making these statements, as you have no certain knowledge of whether they are true or false. However, the burden of proof becomes so much greater as the claims get more implausible. Although you do not know the speaker's brother, there is no particular reason to doubt that he has a brother named Peter, since it is very common to have a sibling & this is a very common name, + there is no particular reason in the context why the speaker should be making this up. So while there is a burden of proof if we wanted to establish for certain that his brother's name is Peter, we would probably just accept it without evidence. When he tells you that Peter is a secret agent, it is wise to be a little sceptical: although we know that secret agents exist, there are few of them & it is rare to meet one. We may want to hear a little more information or see some evidence before we could believe this statement. When he says that Peter is immortal & has magical powers, we would tend not to believe the statement at all, since it goes against our understanding of human abilities within the real world. The burden of proof is huge, and we would need some very reliable evidence to demonstrate Peter's immortality & magical abilities before it could be accepted. ₩€₳$€£ΘĪÐMethinks it is a Weasel 13:58, 15 March 2011 (UTC)
- I don't think burden of proof really applies here, since you are talking about what one person ("you") should believe. It doesn't make sense to say, "Well, I won't believe because there is a burden on Peter to show me evidence." What you really should think is, "I think it's unlikely that Peter is a secret agent, and while I will look for evidence as best I can, I have yet to see any evidence that would convince me of the affirmative." See the difference? The former statement has to do with a social situation (a debate or an argument), the latter doesn't and is about individual rationality. But the idea of "extraordinary evidence", which is really just saying, "if I have a low prior probability I need a high likelihood ratio in order to have a high posterior probability", does apply. Tetronian you're clueless 15:22, 15 March 2011 (UTC)
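As a worked-arithmetic reading of the Peter example (the priors below are invented; only their ordering matters): the lower the prior probability of the claim, the stronger the evidence - expressed as an odds multiplier - needed to reach the same posterior, which is one way of cashing out "extraordinary claims require extraordinary evidence".

```python
def required_evidence(prior, target_posterior=0.9):
    """Likelihood ratio (odds multiplier) needed to lift the prior to the target posterior."""
    prior_odds = prior / (1.0 - prior)
    target_odds = target_posterior / (1.0 - target_posterior)
    return target_odds / prior_odds

for claim, prior in [("has a brother named Peter", 0.3),
                     ("Peter is a secret agent", 0.001),
                     ("Peter is immortal and magical", 1e-9)]:
    print(f"{claim}: evidence worth ~{required_evidence(prior):,.0f}:1 needed")
```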
- I think that Occam's Razor is really another issue. If I remember it correctly it originally was "Entities should not be multiplied unnecessarily", which is then extended to mean "Make your explanations as simple as possible."
- But it doesn't always work. Let's say that I'm a Martian who has come to live on Earth and somebody sends me a letter. The simplest explanation, and also the one with the fewest entities, is that the person who sent the letter came round to my house and put it in my letter box. In fact, the real explanation is a lot more complex than Occam's Razor would have us believe.
- So it's just a suggestion about how to form hypotheses. It's not a way of being "right".--BobSpring is sprung! 15:45, 15 March 2011 (UTC)
- I'm not so sure, since statisticians have formalized Occam's Razor so that we can use it when doing statistical inference (that way we also avoid any confusion coming from the limitations of English, e.g. the shortest sentence is not always the simplest when we break it down mathematically). If we have competing explanations for, say, a series of coinflips, we might penalize one for complexity and change our probability assignments for both statements. As a result, we are justified in favoring one hypothesis over another because of complexity, making our probability assignments different from Maratrean's proposed uniform prior (which, as I understand it, was Susan's argument). Highly related: this. Tetronian you're clueless 15:51, 15 March 2011 (UTC)
- Exactly Tetronian! <aside> WTF is (s)he talking about? Do wish these folk talked English.</aside> Sorry, but I'm out of my depth here. 20:02, 15 March 2011 (UTC)
- No you're not, you hit the nail on the head perfectly! Tetronian you're clueless 20:36, 15 March 2011 (UTC)
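One way to see the formalised Occam penalty Tetronian refers to is through marginal likelihoods (the coin-flip data here are invented): a model with a free parameter spreads its probability over many possible outcomes, so on unremarkable data the simpler model can come out ahead.

```python
from math import comb

n, k = 10, 6                      # hypothetical data: 10 flips, 6 heads

# Simple model: a fair coin, no free parameters.
p_data_fair = comb(n, k) * 0.5 ** n

# Complex model: unknown bias with a uniform prior; integrating the bias out
# gives every head-count from 0 to n the same marginal probability 1/(n+1).
p_data_biased = 1.0 / (n + 1)

print(p_data_fair, p_data_biased) # ~0.205 vs ~0.091: the simpler model wins here
```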
Tetronian, the problem with mere social guidelines (as opposed to putative rules of logic) is that they are clearly culturally specific; they can and do change. And I can be a social change agent, arguing that we should drop this guideline - the sky will not fall in if we do so, and we might even be better off if we did. For social guidelines are justified pragmatically or ethically, whereas rules of logic, laws of thought, claim as their justification something transcultural, something deeper than mere pragmatism. --Maratrean (talk) 01:11, 16 March 2011 (UTC)
- We can rate social guidelines in terms of how much they contribute to epistemic rationality. Burden of proof does a reasonably good job of keeping agents rational in social situations; it might be possible to formulate a better rule, but it's definitely possible to formulate worse ones, i.e. "believe whatever anyone tells you." For the most part, though, burden of proof is pretty much irrelevant to individual rationality. Thus, I don't see too much of a problem with it except when it's used incorrectly (e.g. to make statements about individual rationality or to rationalize ideas and protect them from conflicting evidence). Tetronian you're clueless 01:16, 16 March 2011 (UTC)
It's not about probabilities
From the WP article -
"When debating any issue, there is an implicit burden of proof on the person asserting a claim."
There is nothing here as to whether the claim is probable or improbable. Whether I claim the garage is full to bursting with invisible unicorns or that the sky is blue it is still up to me, as the one making the claim, to provide the proof of my assertion.
Again from WP
"the evidential standard required for a given claim is determined by convention or community standards, with regard to the context of the claim in question"
Or, as it's been so eloquently put, extraordinary claims require extraordinary proofs. Jack Hughes (talk) 14:06, 15 March 2011 (UTC)
- See my comment above: the burden of proof lies with the asserter, but the extent of the burden (or rather the extent of the proof required) depends on the probability/plausibility of the assertion. WěǎšěǐǒǐďMethinks it is a Weasel 14:10, 15 March 2011 (UTC)
- The key phrase is "when debating any issue". Burden of proof is about how we should handle evidence in social situations, not about individual rationality or assigning probabilities. So Maratrean isn't completely wrong, but he (she?) is barking up the wrong tree. Tetronian you're clueless 15:18, 15 March 2011 (UTC)
- Rubbish. Burden of proof applies to any situation where an unproven assertion is made, be it social, legal, philosophical or scientific. ΨΣΔξΣΓΩΙÐMethinks it is a Weasel 15:58, 15 March 2011 (UTC)
- Yes, but only insofar as you are talking about "asserters" (people). If you rephrased a particular situation in terms of just one person and what they should rationally believe, the idea of burden of proof wouldn't be there. This is because a "burden" is an obligation placed on the speaker, but when you rephrase the situation to exclude the speaker, the idea of a burden doesn't make sense anymore. Tetronian you're clueless 16:27, 15 March 2011 (UTC)
- Disagree. If you rephrased a particular situation in terms of just one person and what they should rationally believe, then they are the asserter & there is still a burden of proof. A person may not actually be obliged to justify their beliefs to themself with evidence, but if they do not, they cannot be said to have a rational basis for their beliefs. So when it is a question of just one person and what they should rationally believe, the burden of proof is very much still present. WéáśéĺóíďMethinks it is a Weasel 18:02, 15 March 2011 (UTC)
- No, the one person would be you, the observer. It would sound something like this: "I have a prior probability p(H) about the hypothesis H. In order to update my level of belief in H, I would need to use Bayes' Theorem, which requires that I have my prior probability p(H) and the details about some event E, which will be evidence that will either raise or lower p(H). If I don't observe any event E, then I can't update p(H)." See how there is no mention of a "burden" here? Tetronian you're clueless 18:28, 15 March 2011 (UTC)
- But you can update P(H) by assuming that not observing E is evidence against whatever was proposed that should cause E. For example, the priors for the claim that "the aliens come down every Tuesday at 2am to overwater my cactus garden" will certainly update if no aliens appear. This to me is the heart of burden of proof: for the priors to shift in support of the affirmative claim, affirmative evidence must be sought, whereas the person denying the affirmative claim is supported by no evidence being found. Tmtoulouse (talk) 19:38, 15 March 2011 (UTC)
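A sketch of the update Trent describes, with made-up numbers: if the claim strongly predicts that the aliens should be caught in the act, then repeatedly failing to catch them is itself evidence against the claim.

```python
def update_on_observation(prior, p_obs_if_true, p_obs_if_false, observed):
    """One Bayes update on whether the predicted observation actually occurred."""
    if not observed:
        p_obs_if_true, p_obs_if_false = 1.0 - p_obs_if_true, 1.0 - p_obs_if_false
    num = p_obs_if_true * prior
    return num / (num + p_obs_if_false * (1.0 - prior))

p = 0.5                    # a deliberately generous starting prior
for tuesday in range(4):   # four Tuesdays pass with no aliens observed
    p = update_on_observation(p, p_obs_if_true=0.9, p_obs_if_false=0.01, observed=False)
print(p)                   # falls to roughly 0.0001
```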
- My bad: By "E does not exist" I didn't mean not-E, I meant that there is no such event whose occurrence or non-occurrence could be used as evidence. Not observing E would of course be evidence against H in the case that you describe, but the case where we don't even have a test of H was what I was talking about. Tetronian you're clueless 20:35, 15 March 2011 (UTC)
Jack, "There is nothing here as to whether the claim is probable or improbable". According to Bayesianism, all beliefs have probability. (Whereas, according to frequentism, not all do.) You seem to be assuming a frequentist definition of probability, is fine; but the frequentist one isn't the one being discussed here, its the Bayesian one. --Maratrean (talk) 01:12, 16 March 2011 (UTC)
- Bloody Bayesians want to see everything in terms of probability. Of course all things have probabilities; that is not the point here - we're talking about the burden of proof, not likelihood. Let's look at this from a legal point of view. If the police actually watch someone murder another in cold blood and then later have a written confession, it is still the prosecutor's burden of proof to prove them guilty. It will be easy and uncontested, but it's still the prosecutor's job. The accused is just that, accused, until the prosecutor has proven beyond all reasonable doubt that the accused is guilty.
- Now let's look at this from a scientific point of view. A noted Nobel Prize winner, after years of painstaking research, comes up with a new hypothesis. Because this person is highly respected in their field and because this hypothesis is a natural conclusion of their previous work, as a layman I'm prepared to give them the benefit of the doubt - but that is neither here nor there. The scientific world will require this person, however respected, to provide proof of their assertion. They will be expected to publish their research and back up their assertions. Again there is nothing to do with probabilities here.
- And then let's look at woo. The person who makes a claim for, let's say, homoeopathy is in exactly the same position as the Nobel Prize winning scientist. I, as a layman, might be slower to give them the benefit of the doubt, but again that is not at issue. Just as the Nobel Prize winning scientist is expected to back up their claims with valid research, I demand the same of the woomeister. The burden of proof is upon them just as it is for the scientist.
- So probabilities have absolutely fuck all to do with anything. It doesn't matter if, as a Bayesian, you want to see probabilities here, there or everywhere. We're not talking about whether a proposition is probable, we're talking about who has the burden of proof to back up this proposition. Jack Hughes (talk) 10:45, 17 March 2011 (UTC)
- I completely agree. Heck, this is what I've been trying to get at in the paragraphs above: that the idea of burden of proof is a social guideline used for things like deciding court cases, not a mathematical or probabilistic rule of inference or rationality. Tetronian you're clueless 15:12, 17 March 2011 (UTC)
Onus probandi
Should we add the latin name Onus probandi?-- Pedja (speak up, contributions) 21:20, 8 February 2014 (UTC)
- Done--"Shut up, Brx." 21:34, 8 February 2014 (UTC)
- Thanks. I could have done it myself, but since I just joined, thought I'd better ask first :)-- Pedja (speak up, contributions) 10:56, 9 February 2014 (UTC)
So I found this
I've recently found This essay at a site called http://www.apologeticsindex.org/. I have a question about the essay. It's written in such a way that it seems like word salad. Is it Not even wrong?--TemplarJLS (talk) 13:41, 12 September 2014 (UTC)
Some newly added material[edit]
An anonymous editor added this:
==Scepticism and Pluralism== Although Cartesian scepticism lends to an effective reductionistic method for verifying and refining established truths, the opposing utilitarian approach of epistemological pluralism is equally valuable. The first approach uses scepticism as a default position for all concepts (all ideas are false until proven otherwise), whereby a null hypothesis is preferred against a theory until evidence statistically favors the theory. Epistemological pluralism uses ambivalence as a default position for all concepts (all ideas, including mutually exclusive ideas, are true until proven otherwise), whereby a concept's utility to map established information and predict new information is the validating factor for what is considered true. Two contradicting concepts will be held as true so long as both concepts satisfy all available information and only to the extent that they can be utilized to draw explanations. An example is that both the quantum model and Bohr-Rutherford model of the atom are true up to a specific utility, and beyond that when more evidence is considered, the quantum model is exclusively true.
Scepticism and pluralism are two epistemological strategies both compatible with science. The difference is how burden of proof is handled. From the first approach, the burden of proof is automatically on the proponent of any concept and by default the concept is considered to be false. From a pluralist approach, the burden of proof falls upon the challenger of the concept to demonstrate an inconsistency or a different concept with preferential utility.
I think it would benefit from a rewrite, disambiguation of "skepticism" and sourcing before being reintroduced into the article.--ZooGuard (talk) 18:45, 2 April 2015 (UTC)
- Actually, since the rest of the rewrite depends on that section, I'm going to put it back in. I'm washing my hands of this article. I think it was bad before, and now it's shite, but I don't have the time and skills to fix it.--ZooGuard (talk) 19:17, 2 April 2015 (UTC)
-- Thanks chief. You can flounder rudely elsewhere. (Molzahn) 16:24, 2 April 2015 — Unsigned, by: 204.63.255.225 / talk / contribs
- When random BoNs tell you to get off "their" wikilawn... PacWalker 00:45, 3 April 2015 (UTC)
-- No kidding. My experience hasn't been a positive one so far. (Molzahn) 20:51, 2 April 2015
- Your contributions haven't been very positive either. Read back the first line quoted above: "Although Cartesian scepticism lends to an effective reductionistic method for verifying and refining established truths, the opposing utilitarian approach of epistemological pluralism is equally valuable." Is that how you talk in real life? Do you think this is an easy tone for readers to follow? WèàšèìòìďMethinks it is a Weasel 00:55, 3 April 2015 (UTC)
-- You mean to say that you personally haven't liked my contributions. Speak clearly. Educational background is part of it. I did ask what the demographic was for this site - and that is still an honest question that stands. Wikipedia doesn't gear itself toward younger users. Does this site? If mid-teenager is the appropriate level to write to, that's fine; there's no reason to drop to the maturity of an 8-year-old to express that. Do yourself a favour and don't throw temper tantrums.
We're talking about a site dedicated to specific subjects, where we can assume users are going to take the time to back-check facts and absorb information. Was my first edit perhaps a touch verbose? Sure. I went back and re-edited to lower the language level. Look again at my last edit. You can dumb things down to a point, but after that point the meaning becomes altered. This article needs an overhaul. (Molzahn) 21:08, 2 April 2015
- Since you insist on such wonderfully stiff formalities, please justify all of the above assumptions about this website's audience, particularly the assumptions that it will fact-check and does not include anyone less educated than your benchmark, whatever the hell that actually is. PacWalker 01:52, 3 April 2015 (UTC)
204.63.255.225 (talk) 02:14, 3 April 2015 (UTC) Yes, thank you. I will clarify. What is the demographic of this site? Why is article professionalism a bad thing? I have no qualms with lowering the language. Here is a reference to the latest iteration of changes I proposed (http://rationalwiki.org/w/index.php?title=Burden_of_proof&curid=6401&diff=1445494&oldid=1445484). Shall we discuss them?
"Burden of proof" follows from "Don't harm" principle[edit]
I have derived it here, prompted by this video and by denialists who say that "because scientists misinform us and have not confirmed climate change at 100%, we should not change our lifestyles. If bad things happen, scientists will save us (from climate change/resource depletion)". This is the classic denialist "relax" position, an argument from ignorance, and it is certainly irresponsible. It is moronic to say that "scientists deceive us and will save us", and it is irresponsible to close one's eyes to the threats we are heading towards. Yet the authority of Russell's teapot defends all their crap: they say they are right because they need not believe in the teapot unless it is proven to them (let's leave aside the fact that, to a denialist, it will never be absolutely proven).
Indeed, why should a denialist prove anything, if, as Russell taught, the burden of proving climate change and resource depletion rests on the shoulders of the alarmists who propose them? Using science to cover up the greatest crime and the most moronic thing in history looked suspicious to me. Risk analysis had to say that this was wrong, and indeed I discovered the precautionary principle there, which says that what has to be proven depends on the consequences:
If an action could potentially cause harm to public health or the ecology, then in the absence of scientific consensus the burden of proving that it is not harmful falls on the party taking the action. In other words, the precautionary principle prefers a "false alarm" (a Type I error) to a "miss" (a Type II error).
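In decision-theoretic terms this is just an asymmetric loss function. A minimal sketch (the probability and cost numbers are invented, only to show the asymmetry):

# With a cheap "false alarm" (Type I) and an expensive "miss" (Type II),
# acting cautiously minimizes expected loss even when harm is far from proven.
def expected_loss(p_harm, act_cautiously, cost_false_alarm, cost_miss):
    if act_cautiously:
        return (1 - p_harm) * cost_false_alarm  # we over-reacted and harm never comes
    return p_harm * cost_miss                   # we did nothing and harm arrives

p_harm = 0.2  # no consensus yet: harm is far from certain
print(expected_loss(p_harm, True, cost_false_alarm=1.0, cost_miss=100.0))   # 0.8
print(expected_loss(p_harm, False, cost_false_alarm=1.0, cost_miss=100.0))  # 20.0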
It definitely disagrees with Russell's philosophy of science, and with legal practice. Or does it not?
Looking at children who beat each other for no reason, I formulated the principle "you should not harm anybody unless you are absolutely sure that he is guilty." You cannot do it for fun, or preventively. It is not a big deal if you treat somebody kindly without a reason: you may let a suspect go if you are not sure he is the culprit. That can be regarded as doing somebody good for no reason, which is not bad, whereas attacking an innocent person would really hurt. That is why we must assume good faith to have a good discussion.
Recently I realized that this asymmetry is nothing more than a manifestation of the "do not harm" principle. Assuming good faith also works in court, where the prosecutor must prove your guilt and otherwise let you go: your innocence (good faith) is assumed by default. This is an embodiment of the "don't harm" principle.
I have also realized that Russell's Occam principle follows from "don't harm" as well. Introducing new entities overly complicates a theory; extra entities make it harder to learn and to use. They are garbage, and introducing new notions is harmful! That is why Occam razors them away, and why Russell says you should not introduce them in the first place unless there is a serious reason justifying them. That is also why "the burden of proof rests on the proponent" appears in mathematics and science. But it is nothing more than an instance of "don't harm", which is actually the first principle.
Now everything fits together so well that I believe my guess: that all these major principles, "presumption of innocence", "assuming good faith", "the burden of proof rests on the prosecutor/proponent", and the precautionary principle, all follow from "don't harm", or from the minimization of loss. Probably your site is not the place to challenge original ideas, so I have started a discussion here instead. Is it good enough to be included in the article? Can you confirm my hypothesis? --Valtih1978 (talk) 08:56, 2 June 2015 (UTC)
Attribution[edit]
Some content from http://evolutionwiki.org/wiki/Shifting_the_Burden_of_Proof The FCP Foundation (talk/stalk) 17:21, 20 September 2015 (UTC)
Usefulness of burden of proof[edit]
What's the usefulness of a burden of proof? What's it good for? If I'm a rational agent attempting to maximize my utility, I need my beliefs to correlate with reality. That means it behooves me to investigate things that might change my beliefs. I can't say: look, my belief that the availability of deadly weapons and murder rate by deadly weapons are uncorrelated matches the null hypothesis, so I'm free to hold this belief without examining it! That would just be stupid.
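(To make "it behooves me to investigate" concrete, here is a toy value-of-information sketch; every number and payoff in it is an invented assumption:)

# A rational agent gains by investigating whenever choosing *after* seeing the
# evidence has a higher expected utility than choosing on the prior alone.
p_true = 0.5  # prior probability that the claim is true
payoff = {("act", True): 10, ("act", False): -5,
          ("ignore", True): -10, ("ignore", False): 0}

def expected_utility(action, p):
    return p * payoff[(action, True)] + (1 - p) * payoff[(action, False)]

best_blind = max(expected_utility("act", p_true), expected_utility("ignore", p_true))
# An ideal (free, perfectly reliable) investigation reveals the true state first:
best_informed = (p_true * max(payoff[("act", True)], payoff[("ignore", True)])
                 + (1 - p_true) * max(payoff[("act", False)], payoff[("ignore", False)]))
print(best_blind, best_informed)  # 2.5 vs 5.0 -- investigating is worth the effort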
I suppose it's a useful concept to have inside your head when you're trying to convince others of a proposition, to form a sufficiently strong argument. But the example in the video is of a proposition that has no effect on my decisions. The existence of a god, on the other hand, is important to my decisionmaking, so I investigated the idea myself, even though my position was one that people here would say bore no burden of proof.
Perhaps it's useful to distribute the effort of gathering evidence in a fair manner? A concession to practicality rather than a philosophical reality?
Or for issues that have already been studied a fair bit where the claimant has a novel or heterodox point of view, it could be handy. But that's not "I don't have to provide evidence and you do!" Instead, the claimant has a large amount of evidence against their claim that they must overcome.
Dhasenan (talk) 12:27, 23 March 2016 (UTC)
- This is a joke right? -EmeraldCityWanderer (talk) 13:46, 23 March 2016 (UTC)
- I thought the first paragraph of the article was fairly explicit. MyHatIsBread (talk) 13:51, 23 March 2016 (UTC)
- Maybe you should watch the video in the article if the text isn't doing it for you...? Reverend Black Percy (talk) 13:57, 23 March 2016 (UTC)
- My garden is full of fairies! Over to you Dhasenan.--Bob"Life is short and (insert adjective)" 17:32, 23 March 2016 (UTC)
- If the others have not made it clear, "having no burden of proof" does NOT necessarily equal "there is no need for me to look for proof for either side". If Officer Otterton says to Detective Deerborn, "I think Suspect Skunksworth was NOT home last Tuesday at 9PM, where she usually is every week. In fact, I think she was at the crime scene!", not only can Detective Deerborn ask Officer Otterton for proof of his accusation, he can start investigating Suspect Skunksworth's whereabouts himself! What it does mean is that it's not as much his responsibility to do so. Detective Deerborn can just say to Officer Otterton, "Well, I'm sorry Officer, but we don't have the time or resources to chase every single lead. Unless you've got more proof than just conjecture, I can't do anything about it." (why yes I've seen Zootopia recently why do you ask)
- And on that thought, I don't understand the last paragraph of yours. They're pretty much the same thing. "I don't have to provide evidence [that disproves your heterodox view] and you do [in order to overcome the large amount of evidence we have]!" ℕoir LeSable (talk) 19:49, 23 March 2016 (UTC)
Richard Carrier[edit]
https://www.richardcarrier.info/archives/15029 --Scherben (talk) 16:13, 28 January 2019 (UTC)
- That's so many goddamn words for "from different perspectives the baseline truth is different and thus the burden of proof is not controlled by simple, easy-to-follow rules." ikanreed 🐐Bleat at me 16:26, 28 January 2019 (UTC)