Spotlight fallacy

Not to be confused with the spotlight effect.

The spotlight fallacy is a logical fallacy that occurs when highly publicized data on a group is incorrectly assumed to represent a different or larger group.

It is a form of overgeneralization and an informal fallacy.

Examples

In trials

This fallacy often crops up around clinical trials, which is why a large sample size is needed to counter the effect of highlighting a group that is too specific. If too few people are included in a trial, the result may not be representative enough to be applied to the population as a whole. Of course, in a perfect trial everyone in the entire world would be included, but this isn't practical. In real clinical trials, hundreds, if not thousands, of people are included, and when writing up the results the authors always state the number of participants and what that number allows them to conclude (a larger sample gives the study greater statistical "power", so its result is more reliable). More importantly, they must state whether they were only testing one group (healthy men aged 40-50, for example). It is the qualitative specifics of the group under study that form the basis of the spotlight fallacy: if you only study middle-aged men, you can't reliably extrapolate your medical data to teenage girls.

However, the popular media tend to over-extrapolate such data, and this is where the spotlight fallacy occurs: they assume that one trial applies to everyone. For example, a newspaper headline declares that pills cause heart attacks, but fails to mention that the study showing this only looked at middle-aged men on high doses of the pills. Not only are age and sex barriers to making a generalisation, but the reasons these people were taking high doses act as a further confounding variable. The effect, even if statistically significant, cannot be extrapolated to everyone without more data from a more diverse sample. In a more malicious example, newspapers in the early 2000s reported that the MMR vaccine caused autism. This was based on a study by Andrew Wakefield that was never particularly compelling: it had a sample size of 12 and only looked at children who were autistic and had bowel problems, making the conclusion that the vaccine was dangerous almost guaranteed. Because of the spotlight fallacy and the media's over-extrapolation, the results from this small, specifically-targeted group were incorrectly applied to the population at large, and panic ensued.
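
As a rough illustration of the two separate problems above, here is a minimal Python sketch (not part of the original article; every number in it is invented): a small sample gives a noisy estimate, and even a very large sample only tells you about the group that was actually studied.

  import random

  random.seed(1)

  def observed_rate(true_rate, n):
      """Simulate n patients and return the fraction showing the side effect."""
      return sum(random.random() < true_rate for _ in range(n)) / n

  # Invented "true" side-effect rates: the narrow group actually enrolled
  # (middle-aged men on high doses) versus the wider public.
  rate_in_studied_group = 0.10
  rate_in_general_public = 0.01

  print("small narrow trial :", observed_rate(rate_in_studied_group, 30))
  print("large narrow trial :", observed_rate(rate_in_studied_group, 5000))
  print("true rate in public:", rate_in_general_public)
  # A bigger sample narrows the noise around the studied group's rate,
  # but no sample size fixes the fact that the group isn't the public.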

In polls

Poll data can often be very misleading because of this. Does a poll sample everyone equally? For example, a political survey taken on the street might produce radically different results if it were done outside a Job Centre[note 1] (where the poll is likely to attract a higher number of unemployed people) than if it were done in The City[note 2] (where bankers and traders are more likely to be asked). Indeed, a street poll only reaches people who are willing to stop and answer questions; it entirely excludes large demographics of people who are too rushed, too busy, or simply not there.
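
The location effect is easy to see in a toy simulation. The following Python sketch is not from the article, and the approval figures for each crowd are made up purely for illustration.

  import random

  random.seed(2)

  # Invented approval rates for two sub-populations of passers-by.
  city_workers = [0.70] * 500   # hypothetical approval among bankers and traders
  job_seekers = [0.20] * 500    # hypothetical approval among the unemployed
  everyone = city_workers + job_seekers

  def street_poll(crowd, n=100):
      """Stop n random passers-by and return the fraction who approve."""
      sample = random.sample(crowd, n)
      return sum(random.random() < approval for approval in sample) / n

  print("poll in The City      :", street_poll(city_workers))
  print("poll at a Job Centre  :", street_poll(job_seekers))
  print("representative sample :", street_poll(everyone))
  # Same question, same sample size; only the spotlit crowd changed.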

On the Internet, polls are even less reliable. It may seem simple enough to put a poll on your website and take the results as good once you get several thousand hits; but remember, such a poll only samples the people who visit that website. WorldNetDaily is pretty much guilty as sin for this, as its polls are only answered by the wingnuts who visit and read the site; so is it any surprise when its polls seem to indicate that 95% of Americans think Barack Obama was born in Kenya? And then, of course, there is the possibility of intentional corruption of the results, such as when /b/ raids a poll to engineer a humorous outcome. In one poll asking people what they thought was the most inconvenient part of a pool closing, 4 people answered on day one, with a 25:25:50 split between the three possible answers; on day two, 100% of the votes (from many, many more than 4 people) were for option 3, "pools closed."[citation needed]

Hence reliable poll data needs to come from a wide cross-section of people, not just from a large sample, unless the aim is only to look at a particular demographic (a legitimate exercise if you're honest about it) or to fudge the result in your favour. A recurring theme on PZ Myers' blog, Pharyngula, is "Pointless Polls", in which he asks his minions to skew the results away from the pollster's "intended right answer" so as to show the absurdity of the internet poll.

In religion

The more extreme antitheists tend to assume that all religious people are extremist Bible thumpers, based on examples of individual extremists who happen to belong to the religion. While these individuals may indeed be members of that religion (despite some religious arguments to the contrary), this in no way proves that they are the majority, or that they are in any way representative of that religion's mainstream views.

In webcomics

As observed in Dinosaur Comics,[1] a cop may come to think that everyone is a rule-breaking, murdering goat, since those are the only people a cop comes into contact with; in reality, of course, this is false.

Want to read this in another language?

If you are looking for this article in Portuguese, see Falácia do holofote.

Notes

  1. Apparently that's where unemployed Brits congregate to get their cheques.
  2. This is what Brits call the financial centre of London, basically.

References