Texas sharpshooter fallacy

The Texas sharpshooter fallacy (or clustering fallacy) occurs when the same data is used both to construct and test a hypothesis.

The fallacy is an imprecision fallacy and an informal fallacy.


Explanation

The fallacy's name comes from a parable in which a Texan fires his gun at the side of a barn, paints a bullseye around the bullet hole, and claims to be a sharpshooter. Though the shot may have been totally random, he makes it appear as though he has performed a highly non-random act. In normal target practice, the bullseye defines a region of significance, and there's a low probability of hitting it by firing in a random direction. However, when the region of significance is determined after the event has occurred, any outcome at all can be made to appear spectacularly improbable.

The Texas sharpshooter fallacy uses the same data both to construct and to test a hypothesis. A hypothesis must be constructed before the data used to test it are collected. If one data set is used to construct a hypothesis, then a new data set must be generated (ideally in a different way, based on predictions made by the hypothesis) to test it.
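
To make the distinction concrete, here is a minimal simulation of the barn-wall parable (a sketch of my own, not from any cited source; the 10 x 10 wall, the bullseye radius, and the uniformly random shots are all arbitrary assumptions). A bullseye drawn before the shot is rarely hit by chance, while a bullseye painted around the bullet hole is "hit" every single time, no matter how the gun was aimed:

  import math
  import random

  WALL = 10.0      # the barn wall: a 10 x 10 square (arbitrary units)
  RADIUS = 0.5     # radius of the bullseye (also arbitrary)
  TRIALS = 100_000

  def random_shot():
      # A shot fired with no aim at all: a uniform point on the wall.
      return (random.uniform(0, WALL), random.uniform(0, WALL))

  def hit(bullseye, shot):
      return math.dist(bullseye, shot) <= RADIUS

  # Honest target practice: the bullseye is fixed before any shot is fired.
  fixed_bullseye = (5.0, 5.0)
  hits_before = sum(hit(fixed_bullseye, random_shot()) for _ in range(TRIALS))

  # Texas sharpshooter: the bullseye is painted around wherever the bullet lands.
  hits_after = 0
  for _ in range(TRIALS):
      shot = random_shot()
      painted_bullseye = shot              # region of significance chosen post hoc
      hits_after += hit(painted_bullseye, shot)

  theory = math.pi * RADIUS ** 2 / WALL ** 2
  print(f"pre-drawn bullseye hit rate:  {hits_before / TRIALS:.4f} (theory ~ {theory:.4f})")
  print(f"post-hoc bullseye 'hit rate': {hits_after / TRIALS:.2f}")

The painted bullseye corresponds to choosing the region of significance after seeing the data: the apparent improbability of the shot (under 1% for the fixed target in this sketch) then says nothing about the shooter's skill.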

Examples

  • The physicist Richard Feynman once started a lecture on statistical physics by reciting a license plate number he had seen on the way in and asking his students what the probability was that he'd see that particular number. The probability, of course, was quite low. But this is true no matter what license plate number one sees, and unless it has an independently defined significance, this probability is meaningless.
  • A raffle with a million participants was drawn, and Joe was found to be the winner. Afterwards, someone points out that the odds of Joe winning were a million to one and concludes that he couldn't have won randomly and must have cheated. Of course, the odds of any other individual winning were also a million to one, so the same accusation could be levelled at anyone, while the chance that somebody would win was 100%. Joe simply lucked out; somebody had to. (A short sketch of this arithmetic follows the list.)
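
A quick back-of-the-envelope script makes the raffle arithmetic explicit (a hypothetical Python sketch; the ticket numbers and the repeated draws are purely illustrative). Any named person's odds are one in a million, yet a one-in-a-million winner is produced with certainty on every draw:

  import random

  ENTRANTS = 1_000_000             # hypothetical raffle size from the example above

  p_joe = 1 / ENTRANTS             # any *named* person is wildly unlikely to win
  p_someone = 1.0                  # but the prize always goes to somebody

  print(f"P(Joe wins)     = {p_joe:.7f}")
  print(f"P(someone wins) = {p_someone:.1f}")

  # Every draw produces a winner whose individual odds were one in a million,
  # so "the winner was improbable" is never, by itself, evidence of cheating.
  for draw in range(1, 6):
      winner = random.randrange(ENTRANTS)
      print(f"draw {draw}: ticket {winner} won, at odds of 1 in {ENTRANTS:,}")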

[edit] "Random" evolution

Creationist and intelligent design arguments claim that the chances of a protein molecule forming "randomly", or a cell forming "randomly" via abiogenesis, or the universe forming "randomly" into what we see today are incredibly low, and thus it must have been designed. This argument is extremely faulty in that it doesn't acknowledge that physical processes are not random, but are guided by the laws of physics, chemistry, and eventually, biology: evolution via variation and natural selection.

The same problem turns up outside creationism. In late 2010, "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect", a psychology paper by Daryl Bem, ostensibly provided evidence of precognition. In Bem's experiments, a small but statistically significant number of test subjects' responses appeared to be influenced by conditions presented later in the tests.[1] However, Bem has acknowledged forming some of his conclusions after the tests, rather than testing fixed hypotheses as any rigorous application of the scientific method requires. He has also stated that, before concluding and publishing his research, "I purposely waited until I thought there was a critical mass that wasn't a statistical fluke".[1] While this may seem sensible at first glance, deliberately waiting for such a "critical mass" means stopping data collection at the point when the results happen to look favourable to the hypothesis, rather than running a pre-set number of experiments before checking the overall findings.
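
To see why such waiting matters, here is a rough simulation (my own illustration, not based on Bem's actual data or procedures; the sample size, checking interval, and z-test are assumptions chosen for simplicity). Even when subjects perform exactly at chance, checking the data repeatedly and stopping as soon as the test looks significant produces far more false positives than analysing a pre-planned sample once:

  import random
  from statistics import NormalDist

  ALPHA = 0.05        # conventional significance threshold
  MAX_N = 1000        # subjects available in total
  CHECK_EVERY = 20    # how often the "peeking" experimenter looks at the data
  RUNS = 1000         # simulated replications of the whole experiment

  def p_value(hits, n):
      # Two-sided z-test of the hit rate against chance (50%), normal approximation.
      z = (hits - 0.5 * n) / (0.25 * n) ** 0.5
      return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

  def experiment(stop_when_significant):
      hits = 0
      for n in range(1, MAX_N + 1):
          hits += random.random() < 0.5      # pure chance: no precognition exists
          if stop_when_significant and n % CHECK_EVERY == 0 and p_value(hits, n) < ALPHA:
              return True                    # declares a "critical mass" and stops
      return p_value(hits, MAX_N) < ALPHA    # single pre-planned analysis at the end

  fixed = sum(experiment(stop_when_significant=False) for _ in range(RUNS)) / RUNS
  peeking = sum(experiment(stop_when_significant=True) for _ in range(RUNS)) / RUNS
  print(f"false positives with one pre-planned analysis:      {fixed:.3f} (close to 0.05)")
  print(f"false positives when stopping at a 'critical mass': {peeking:.3f} (inflated)")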

Young Earth thinking

Much of young Earth creationism relies on this form of post hoc reasoning. It is most clearly demonstrated in fundamentalist Christians' accounts of how the Flood created geologic structures: their ideas rely on finding data and constructing a hypothesis around that data, with no further testing of those ideas once constructed. This is a clear example of the Texas sharpshooter fallacy.

Crazification

The crazification factor is a frequent example in popular usage: you will find endless instances of people online going ad hominem when 20-30% of people do something the speaker doesn't like, or are even merely polled as holding an opinion the speaker doesn't like.

Legitimate use

You do not commit this fallacy if you:

  • calculate the probability of a particular event after the fact based on a criterion that would have been clearly significant even before the event occurred.
  • re-test an observation to determine if previous clustering may have been due to chance (as sketched after this list).
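
As an illustration of the second point, here is a small sketch (entirely hypothetical data: twenty made-up districts with uniformly scattered "cases") of re-testing an apparent cluster on fresh data rather than on the data that suggested it:

  import random
  from collections import Counter

  DISTRICTS = [f"district_{i}" for i in range(20)]   # made-up neighbourhoods
  CASES = 200                                        # cases recorded per survey

  def survey():
      # Cases scattered uniformly at random: there is no real cluster anywhere.
      return Counter(random.choice(DISTRICTS) for _ in range(CASES))

  # Step 1: construct the hypothesis. Some district is always the busiest,
  # so the first survey always seems to point at a "cluster".
  first = survey()
  suspect, observed = first.most_common(1)[0]
  expected = CASES / len(DISTRICTS)
  print(f"{suspect}: {observed} cases against about {expected:.0f} expected")

  # Step 2: test the hypothesis on new, independently collected data.
  second = survey()
  print(f"{suspect} in the fresh survey: {second[suspect]} cases "
        f"(expected about {expected:.0f}; the excess usually fails to reproduce)")

Because the busiest district in the first sample is selected after the fact, its apparent excess usually vanishes in an independent sample; only if it reappears is there reason to take the cluster seriously.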

Footnotes

  1. See the coverage in New Scientist, "Is this evidence that we can see the future?", Peter Aldhous, 11 November 2010.
