Fallacy of the transposed conditional

The fallacy of the transposed conditional is a common logical error made when drawing conclusions from experimental data. Essentially, it is confusing the probability of a set of data given a hypothesis with the probability of a hypothesis given that set of data.
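In symbols (a notation we introduce here, with D standing for the data and H for the hypothesis), the fallacy is treating these two conditional probabilities as interchangeable:

P(D \mid H) \neq P(H \mid D)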

It is usually easiest to understand through analogy. If our hypothesis is that it is raining, we can be almost certain that we will observe clouds in the sky; however, if we observe clouds in the sky, we cannot say that it is almost certainly raining. So the probability of our data (observing clouds) is nearly 100 percent given our hypothesis (that it is raining), but our hypothesis (that it is raining) is not nearly 100 percent certain given only our data (observing clouds).
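Putting the analogy in the same notation (with R for "it is raining" and C for "clouds are observed"), the point is that

P(C \mid R) \approx 1 \quad \text{does not imply} \quad P(R \mid C) \approx 1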

In order to relate the probability of our data given our hypothesis to the probability of the hypothesis itself, we require additional information: the probability of our hypothesis relative to alternative hypotheses before the data were collected. We can then use our observations to update the probability of our main hypothesis. Bayesian statistics, via Bayes' theorem, is the main method of doing this; most statistical analysis, however, is frequentist, and so any attempt to relate the calculated probabilities directly to the original hypothesis is a fallacy of the transposed conditional.
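Bayes' theorem makes the required update explicit:

P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}, \qquad P(D) = P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)

The short sketch below works through the rain/clouds analogy numerically. The prior and likelihood values are purely illustrative assumptions chosen for this example, not figures from the article.

# A minimal sketch of the Bayesian update described above, using made-up
# illustrative numbers for the rain/clouds example.

def posterior(prior, likelihood_h, likelihood_not_h):
    """Return P(H | D) via Bayes' theorem:
    P(H | D) = P(D | H) P(H) / [P(D | H) P(H) + P(D | ~H) P(~H)]
    """
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Assumed, purely illustrative values:
p_rain = 0.10               # prior P(rain) before looking outside
p_clouds_given_rain = 0.99  # P(clouds | rain): nearly certain
p_clouds_given_dry = 0.40   # P(clouds | no rain): clouds occur without rain too

p_rain_given_clouds = posterior(p_rain, p_clouds_given_rain, p_clouds_given_dry)
print(f"P(rain | clouds) = {p_rain_given_clouds:.2f}")  # ~0.22, far from 0.99

With these assumed numbers, observing clouds raises the probability of rain from 10 percent to only about 22 percent, nowhere near the 99 percent one would wrongly infer by transposing the conditional.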
