Draft:Bayesian statistics


Formerly known as Inverse probability,[1] Bayesian statistics is one of the two main frameworks of statistics used within science. It was primarily developed by Pierre-Simon Laplace, beginning in 1774,[2] and by default became the dominant framework used by science until approximately the Second World War, when most branches of science replaced it with the second framework, frequentism.

In brief, it holds that probability is a measure of your belief in a hypothesis, and that the probability of a hypothesis being true, given the observed evidence, is proportional to the product of the probability of observing that evidence if the hypothesis is true and the prior probability of that hypothesis being true, absent that evidence. The usage of Bayesian statistics in trendy fields like machine learning and neural nets, combined with its underdog status within the sciences, helps make it catnip for cranks.

Key components[edit]

Probability as quantized belief[edit]

A foundational assertion of Bayesian statistics is that probability represents a "credence," or degree of belief, in some proposition.

The regularity which astronomy shows us in the movements of the comets doubtless exists also in all phenomena. The curve described by a simple molecule of air or vapor is regulated in a manner just as certain as the planetary orbits; the only difference between them is that which comes from our ignorance.

Probability is relative, in part to this ignorance, in part to our knowledge. We know that of three or a greater number of events a single one ought to occur; but nothing induces us to believe that one of them will occur rather than the others. In this state of indecision it is impossible for us to announce their occurrence with certainty. It is, however, probable that one of these events, chosen at will, will not occur because we see several cases equally possible which exclude its occurrence, while only a single one favors it.[3]

It is a fact that our degrees of confidence in a proposition habitually change when we make new observations or new evidence is communicated to us by somebody else, and this change constitutes the essential feature of all learning from experience. We must therefore be able to express it. Our fundamental idea will not be simply the probability of a proposition $p$, but the probability of $p$ on data $q$.[4]

Contrast this with another view of probability, that it represents the long-run frequency of outcomes of a system.

In a humorous vein we might say that the honor of discovering the category of natural phenomena that generated the frequentist theory of probability belongs to the first crook who loaded his dice. Before embarking on this project the particular individual must have realized that the relative frequencies of a die falling this way or that way are 'persistent' and constitute this die's measurable properties, comparable to its size and weight. Having discovered this fact (and this was a 'scientific discovery'), the crook decided to use the discovery for his own benefit (and this might be described as the initiation of a special 'technology').

It so happens, ... that very substantial sections of modern science and of technology are working hard more or less to follow the steps of the above crook (no offense is intended!).[5]

As a consequence, the underlying goal of Bayesian statistics is to combine propositions together to form some model of our observations. If that model is a good match, it will earn a high degree of belief, relative to other models that consider the same evidence but are a poor match for what we observe. This also implies that credences can be quantified, in some fashion.

Relativism[edit]

If our degrees of belief in a model or cause can be quantified as a number, there is a straightforward way to calculate it.

Sixth Principle. Each of the causes to which an observed event may be attributed is indicated with just as much likelihood as there is probability that the event will take place, supposing the event to be constant. The probability of the existence of any one of these causes is then a fraction whose numerator is the probability of the event resulting from this cause and whose denominator is the sum of the similar probabilities relative to all the causes; if these various causes, considered à priori, are unequally probable, it is necessary, in place of the probability of the event resulting from each cause, to employ the product of this probability by the possibility of the cause itself.[6]

In mathematical notation, this translates into

$$P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)},$$

where $P(H_i \mid E)$ is the probability of the $i$th model under consideration, on the assumption that we have observed evidence $E$; $P(E \mid H_i)$ the probability of the observed evidence, on the assumption that the $i$th model under consideration is the correct model; and $P(H_i)$ the probability that the $i$th model under consideration is the correct model, absent any observed evidence. $P(E \mid H_i)$ is commonly called the "likelihood function," often shortened to the "likelihood," while $P(H_i)$ is known as the "prior." To maintain consistency with that last point, $P(H_i \mid E)$ is often known as the "posterior," to the delight of teenagers across the globe.

Note that this expression, dubbed "Bayes' Theorem," implies that probability is relative. No one model or "hypothesis" is treated differently than any other by the framework. The probability of one hypothesis is relative to the probabilities of all others used in the calculation. Adding another hypothesis to the calculation will never increase the probability of any specific hypothesis, and it will always decrease the probability if that new hypothesis has a non-zero degree of belief[7]. The only limit on the number of hypotheses in play is our ability to handle the calculations; in some rare cases, when the math aligns, it is possible to wrangle a non-finite number of them.
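
For the programmatically-inclined, here is a minimal sketch of the calculation above in Python; the likelihoods and priors are made-up numbers, purely for illustration.

```python
# A minimal sketch of Bayes' Theorem over a small, discrete set of hypotheses.
# The likelihoods and priors below are made-up numbers for illustration.
def posteriors(likelihoods, priors):
    """Return P(H_i | E) for each hypothesis, given P(E | H_i) and P(H_i)."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    evidence = sum(joint)              # the denominator: sum over all hypotheses
    return [j / evidence for j in joint]

# H1: the coin is fair; H2: the coin lands heads 75% of the time.
# Evidence: a single flip came up heads.
print(posteriors(likelihoods=[0.5, 0.75], priors=[0.9, 0.1]))
# -> [0.857..., 0.142...]: the fair coin is still heavily favoured.
```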

Hypotheses[edit]

Hypotheses or models can be dependent on zero or more free parameters. For instance, when tossing a coin, one possible hypothesis is that the coin is biased such that, if flipped repeatedly, the proportion of times heads came up relative to the total number of flips comes arbitrarily close to some particular value $\theta_0$. We can imagine other hypotheses that assert it is instead some other value that is between 0 and 1. All possible values of $\theta$ form a one-dimensional "parameter space" shared by all similar hypotheses, sometimes dubbed a "population" of potential parameters. In this case $\theta$ is a real number with strict bounds, but other models may have parameters with no bound, or parameters that can only be integers or chosen from a finite number of categories. The number of dimensions can also vary and is typically a fixed finite value, though arbitrary- and infinite-dimensional parameter spaces are possible.[8][9]

A hypothesis that occupies a single point within a parameter space is called a "simple" or "point" hypothesis. Mathematically, we can express this as

$$H : \theta = \theta_0,$$

where $\theta_0$ is a point in parameter space. A "composite" hypothesis instead spans a range of values within a parameter space: the proportion of heads is greater than one half ($\theta > \tfrac{1}{2}$), for instance.[10] Composite hypotheses can also be a weighted combination of point hypotheses.[11]

If the parameters of a hypothesis are unknown, we can use the above equation to determine the relative probabilities of points within the parameter space. This transforms the posterior from a fixed probability into a probability distribution over the parameter space, a practice so common that "posterior" is often assumed to mean "posterior probability distribution."

Hypothesis testing[edit]

Bayesian statistics does not provide an explicit way to "test" whether a hypothesis is worthy of further consideration. The relative likelihood of two hypotheses can be easily compared, however, as dividing one variant of Bayes' Theorem by another variant gives us

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)},$$

which is known as the "odds ratio" form of Bayes' Theorem; the ratio of likelihoods, $P(E \mid H_1) / P(E \mid H_2)$, is specifically the "Bayes factor."[12] If there are multiple observations, and those observations are independent of one another, then we can use the posterior as the prior for the new data and update our beliefs.
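
A minimal sketch of that updating loop, using the odds ratio form and a made-up pair of hypotheses about a coin:

```python
# A minimal sketch of sequential updating in odds form: after each independent
# observation, the old posterior odds become the new prior odds.
def update_odds(prior_odds, likelihood_h1, likelihood_h2):
    """Multiply the prior odds (H1 : H2) by the Bayes factor of one observation."""
    return prior_odds * (likelihood_h1 / likelihood_h2)

# H1: the coin is fair; H2: the coin lands heads 75% of the time.
odds = 1.0                      # start indifferent between the two
for flip in "HHTHH":            # a made-up sequence of observations
    if flip == "H":
        odds = update_odds(odds, 0.5, 0.75)
    else:
        odds = update_odds(odds, 0.5, 0.25)
print(odds)                     # ~0.4, i.e. the evidence now leans towards H2
```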

As one hypothesis becomes more likely than another, the Bayes factor will either increase without bound, or come arbitrarily close to zero. The former means that Bayesian statistics can be used to argue in favour of a hypothesis. The math explicitly discourages declaring any hypothesis to have zero probability, as the interchangeability of hypotheses means we can break the odds ratio form of Bayes' Theorem via division by zero. Bayesian statistics thus strongly endorses Cromwell's rule: "I beseech you, in the bowels of Christ, think it possible that you may be mistaken." Begin your analysis with as many hypotheses as you can calculate, and end it with the same number.

As a consequence, Bayesian statistics does not reject hypotheses. If the ability to accept or reject hypotheses is desired, we can impose a cutoff on a posterior probability or a Bayes factor and declare that beyond that point we have falsified a hypothesis. This can cause issues should later evidence wind up favouring a discarded hypothesis, but the true problem here lies not with the statistical framework but between the keyboard and the chair.

Priors[edit]

Earlier, the prior was defined as "the probability that the model under consideration is the correct model, absent any observed evidence." Note that this only rules out using evidence we expect to observe; it says nothing about using secondary evidence we already have on hand.

For example, consider Daryl Bem's infamous 2011 paper on precognition, which he defines as "conscious cognitive awareness ... of a future event that could not otherwise be anticipated through any known inferential process."[13] The hypothesis "these study subjects exhibit some level of precognition" depends on the hypothesis "precognition exists." What odds would you give to that second hypothesis? We know that modern physics rules out any unknown fields we could use to "hide" a signal, so it must come via a known force.[14] While our eyes are very responsive to a narrow range of electromagnetic radiation, and our skin weakly responsive to higher-energy radiation, there is no known antenna in the human body capable of picking up lower frequency signals. Our gravity sensors are crude, and both the strong and weak force operate on much smaller scales than we can sense. This signal would also have to travel backwards in time, which has never been demonstrated in practice despite hundreds of years of experimentation, and it would have to somehow be precisely aimed to a location on Earth in the past, which is difficult since Earth orbits around the Sun at approximately thirty kilometres per second. All that evidence suggests the prior probability of "precognition exists" is approximately zero, before we have looked at a single one of Bem's data points. This places an upper limit on the prior probability of "these study subjects exhibit some level of precognition."

We can convert the prior into a probability distribution via the same means as the posterior, freeing us from having to choose a specific value for any unknown parameters. Increasing the range of plausible values within the prior distribution also increases the range of plausible values within the posterior distribution. As additional data is added, the likelihood (usually) narrows the range of plausible values in the posterior, gradually sharpening our conclusions and watering down the influence of the prior. Conversely, even when data is sparse, the prior still allows us to draw at least a vague conclusion about what we have observed. We can even get a feeling for how plausible the prior is by drawing random samples from its probability distribution, and checking them against what we expect the associated hypothesis to predict. Priors are extremely powerful within Bayesian statistics.
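
Here is a minimal sketch of that kind of prior check for the coin example, assuming a Beta(5, 5) prior purely for illustration:

```python
# A minimal sketch of checking a prior by sampling from it: draw heads-proportions
# from a made-up Beta(5, 5) prior (weakly informative, centred on a fair coin),
# simulate 100 flips for each draw, and eyeball the results.
import numpy as np

rng = np.random.default_rng(1)
theta_draws = rng.beta(5, 5, size=1000)               # samples from the prior
simulated_heads = rng.binomial(n=100, p=theta_draws)  # one dataset per sample

# If the prior is sensible, most simulated coins should look like coins we
# actually meet; a prior that routinely produces 95+ heads out of 100 would not.
print(np.percentile(simulated_heads, [2.5, 50, 97.5]))
```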

There are several ways to classify priors. A "non-informative" prior is intended to be as weak as possible, easily altered by the likelihood as evidence is added. In the example of flipping a coin, we could say that all possible values of $\theta$, the proportion of times heads came up relative to the total number of flips, are equally likely. Laplace followed the "principle of indifference:" when you have no reason to favour one point in the parameter space over any other, make them all equally probable.[15] Some thought suggests this is a terrible prior to use for a coin flip, as our experience suggests encountering a coin that lands heads ninety-nine times out of a hundred is much rarer than one that's fifty-fifty. Rather than assuming complete ignorance, it usually makes more sense to apply our prior knowledge to create an "informative" prior. When we use our prior knowledge but purposefully dilute it, just in case we're overconfident, we are invoking a "weakly-informative" prior.

Mathematically, prior distributions can also be "proper" or "improper." The former follows all the rules of a valid probability distribution: for instance, if we were to sum up the probabilities of every possible point in the parameter space, it would sum to 1. The latter type of prior breaks at least one rule. A common example happens when fitting a line to a scattering of data points; the prior for the spot where that line intercepts the Y axis often invokes the principle of indifference, using a flat distribution. But since the intercept is allowed to be any real number, any attempt to sum up the probabilities for each possible intercept will rapidly shoot off towards infinity. Most non-informative priors tend to be improper (the coin example being one exception), which places some limits on their use.

Some priors are known as "conjugate." For instance, if the prior for a hypothesis is identical to a Beta probability distribution with known $\alpha$ and $\beta$ parameters, and the likelihood is a Binomial probability distribution with known $n$ and $k$ parameters, then the posterior is also a Beta distribution. If the prior for a hypothesis and the likelihoods are both Gaussian or Normal distributions, the posterior is also a Gaussian/Normal distribution. Conjugate priors excel in iterative updating of our beliefs, when they apply, and are very convenient to work with mathematically. To differentiate between the model parameters and the parameters of these statistical distributions, the latter are known as "hyperparameters."
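
A minimal sketch of that Beta–Binomial update rule, with made-up hyperparameters and data:

```python
# A minimal sketch of a Beta-Binomial conjugate update, with made-up numbers:
# a Beta(alpha, beta) prior plus k successes in n trials gives a
# Beta(alpha + k, beta + n - k) posterior, no integration required.
from scipy.stats import beta

def beta_binomial_update(alpha, beta_hyper, k, n):
    """Return the posterior hyperparameters after observing k successes in n trials."""
    return alpha + k, beta_hyper + (n - k)

a_post, b_post = beta_binomial_update(alpha=2, beta_hyper=2, k=14, n=20)

print(beta.mean(a_post, b_post))            # posterior mean, 16/24 ~ 0.67
print(beta.interval(0.95, a_post, b_post))  # 95% credible interval for theta
```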

A common technique in Bayesian statistics is to use more than one prior. We can sometimes factor the prior out of Bayes' Theorem, via its odds ratio form:

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \bigg/ \frac{P(H_1)}{P(H_2)} = \frac{P(E \mid H_1)}{P(E \mid H_2)}.$$

The ratio of likelihoods becomes a measure of the weight of the evidence, telling us how much our beliefs are shifted towards or away from each hypothesis. We can then experiment with multiple priors, to see how strongly our conclusions are warped by our prior beliefs, or estimate how much evidence we'd need for the posterior of one hypothesis to become more likely than another. Even when the prior cannot be cleanly factored out, it's usually worth the effort to repeat the analysis multiple times with different priors, to examine how much our posterior belief changes.
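
A minimal sketch of such a sensitivity check, re-running a coin-flip analysis under several made-up priors:

```python
# A minimal sketch of repeating an analysis under several priors.  The data
# and priors are made up; Beta priors keep the update trivial (see above).
from scipy.stats import beta

k, n = 62, 100                      # made-up data: 62 heads in 100 flips
priors = {
    "indifferent":        (1, 1),
    "weakly informative": (5, 5),
    "strongly fair":      (100, 100),
}

for name, (a, b) in priors.items():
    lo, hi = beta.interval(0.95, a + k, b + n - k)
    print(f"{name:>18}: mean={beta.mean(a + k, b + n - k):.3f}, "
          f"95% interval=({lo:.3f}, {hi:.3f})")
```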

Marginalization[edit]

Point hypotheses can be deeply problematic. The odds of randomly drawing any one specific value from the real number line is zero, as there is only one successful outcome but uncountably infinite alternatives. By extension, the prior probability of "the proportion of times heads came up relative to the total number of flips comes arbitrarily close to $\theta_0$" is also zero; Mother Nature never keeps her room that clean. This argument does not apply when a parameter only offers finite choices, but those tend to be the minority. Point hypotheses with this problem are known as "nil hypotheses."[16]

Composite hypotheses are usually free of this problem, and they are formed from point hypotheses via a process called "marginalization." This is just a fancy term for "adding" or "integrating," the correct choice depending on whether a parameter is discrete or continuous. The latter, put mathematically, is

$$P(E \mid H) = \int_{\Theta} P(E \mid \theta, H)\, P(\theta \mid H)\, d\theta.$$

Marginalization is also useful for handling "nuisance parameters." When fitting a line to a scatterplot, it is common to express the model in terms of the slope of the line, where the line intercepts the Y axis, and how much variance there is between the data points and the line. The first two are of far more interest than the third, which is therefore often marginalized away to create a probability distribution over two parameters, instead of three.
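
A minimal sketch of marginalization in practice, on a toy version of that line-fitting problem with just a slope and a noise level (the noise being the nuisance parameter):

```python
# A minimal sketch of marginalizing a nuisance parameter on a grid: a toy
# line through the origin (slope only) plus Gaussian noise, where the noise
# level is the nuisance parameter we sum away.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)     # made-up data, true slope 2

slopes = np.linspace(0, 4, 201)
sigmas = np.linspace(0.05, 1.0, 100)

# Unnormalized log-posterior over the grid, with flat priors on both parameters.
log_post = np.array([[norm.logpdf(y, loc=m * x, scale=s).sum() for s in sigmas]
                     for m in slopes])
post = np.exp(log_post - log_post.max())

# Marginalization: sum over sigma, leaving a distribution over the slope alone.
marginal_slope = post.sum(axis=1)
marginal_slope /= marginal_slope.sum()
print(slopes[np.argmax(marginal_slope)])             # should land near 2
```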

Examples[edit]

Precognition[edit]

Daryl Bem's 2011 paper on precognition (see Priors) ran a total of nine experiments.[13] Most of them can be modeled as a Bernoulli process (the mathematical ideal of a coin toss), so their likelihoods are a Binomial distribution. That means we can greatly simplify the analysis by making the Beta distribution our prior and invoking conjugate priors. For this audience, "precognition does not exist" would be the first hypothesis you'd be interested in, but note that this isn't enough information to use in our analysis. As noted earlier, the hypothesis we can actually quantify combines "the observed evidence could be produced by chance" and "precognition does not exist." While these may not seem distinct, contrast "precognition does not exist, but Bem fudged the data" with "precognition exists, but Bem chose test subjects without precognitive ability." Recognizing this is important, as the "chance" hypothesis depends on a parameter while "precognition isn't a thing" does not. Mathematically, we will call the former $H_1$, while the latter gets no symbol of its own.

The second hypothesis for comparison isn't as clear-cut, as merely stating "precognition exists" isn't quantitative. Two data points are useful here: when breaking codes during World War II, Alan Turing used the "deciban" as a basic unit of the weight of evidence, and Irving Good claims a "deciban or half-deciban is about the smallest change in weight of evidence that is directly perceptible to human intuition."[17] One deciban corresponds to odds of $10^{1/10} \approx 1.26$ to 1; translating from an odds ratio to a percent, we get roughly a 55.7% chance of success. While that claim was made without evidence, a 2009 study tested how biased a coin toss could be.[18] Thirteen grad students, given a few weeks to practice in private, were able to land heads in a fair-looking coin toss 56.7% of the time, on average. It seems trivially easy to bias a coin toss, and yet most people still consider the process free of bias. By extension, asserting that, if precognition exists, the average success rate is one deciban above chance, a hypothesis we will dub $H_2$, threads the needle between the lack of abundant evidence for precognition and the assertion it exists.

If our prior is described by $\mathrm{Beta}(\alpha, \beta)$, and our likelihood is a Binomial distribution with $k$ successes over $n$ trials, then our posterior is described by $\mathrm{Beta}(\alpha + k, \beta + n - k)$. We will invoke the principle of indifference and use a prior of $\mathrm{Beta}(1, 1)$; while it's tempting to make that a free parameter, the experiment we're considering has $k = 2790$ and $n = 5400$, so all but the most extreme possibilities are drowned out by the data.

Note that this posterior is a distribution, not a quantity, so we need to marginalize away the sole parameter, $\theta$. As pointed out in Marginalization, we wish to avoid point hypotheses. One solution is to notice that if we know the probability of success is $p_i$, then we expect $n p_i$ successes and $n(1 - p_i)$ failures over $n$ trials. We can use $\mathrm{Beta}(n p_i + 1,\, n(1 - p_i) + 1)$ to weight every possible choice of $\theta$ when marginalizing:

$$P(E \mid H_i) \propto \int_0^1 \mathrm{Beta}(\theta \mid k + 1,\, n - k + 1)\;\mathrm{Beta}(\theta \mid n p_i + 1,\, n(1 - p_i) + 1)\,d\theta.$$

Thanks to conjugate priors, that is simple to compute:

$$\frac{P(E \mid H_1)}{P(E \mid H_2)} = \frac{\mathrm{B}(k + n p_1 + 1,\; n - k + n(1 - p_1) + 1)\;/\;\mathrm{B}(n p_1 + 1,\; n(1 - p_1) + 1)}{\mathrm{B}(k + n p_2 + 1,\; n - k + n(1 - p_2) + 1)\;/\;\mathrm{B}(n p_2 + 1,\; n(1 - p_2) + 1)}, \qquad \mathrm{B}(a, b) = \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a + b)},$$

where $p_i$ is the probability of success under $H_i$ and $\Gamma$ is the Gamma function. Don't worry, its bark is worse than its bite; if $x$ is an integer, then $\Gamma(x) = (x - 1)!$. Pick your inputs carefully, and a lot of cancellations will happen without any need to invoke calculus. The output is an odds ratio, where values greater than one signify evidence that favours $H_1$, "the observed evidence could be produced by chance", over $H_2$, "the observed evidence could be more successful than chance predicts, by one deciban," and vice-versa. We can then analyze how that changes our "precognition does/does not exist" prior. Since calculation is cheap, we'll also generate the corresponding p-value.

Bem (2011), Experiment 2
k        n        odds ratio (H1 : H2)    p-value[19]
2,790    5,400    1,751.1                 0.014307
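
For those following along at home, here is a sketch of the calculation in Python. The exact recipe behind the table is not spelled out, so treat the Beta weighting below as an assumption based on the marginalization described above; it lands in the same ballpark as the tabulated odds ratio.

```python
# A sketch of the odds ratio above for Bem's Experiment 2, assuming the
# Beta weighting reconstructed in the text.  Log-Beta functions keep the
# Gamma-function arithmetic from overflowing.
import numpy as np
from scipy.special import betaln
from scipy.stats import norm

def log_marginal(k, n, p):
    """ln[ B(k + n*p + 1, n - k + n*(1-p) + 1) / B(n*p + 1, n*(1-p) + 1) ];
    the binomial coefficient is omitted because it cancels in the ratio."""
    a, b = n * p + 1, n * (1 - p) + 1
    return betaln(k + a, n - k + b) - betaln(a, b)

k, n = 2790, 5400                      # Bem (2011), Experiment 2
p1 = 0.5                               # H1: chance
p2 = 10**0.1 / (1 + 10**0.1)           # H2: one deciban above chance, ~55.7%

bayes_factor = np.exp(log_marginal(k, n, p1) - log_marginal(k, n, p2))
p_value = 2 * norm.sf(abs(k - n * p1) / np.sqrt(n * p1 * (1 - p1)))

print(bayes_factor)                    # in the vicinity of the table's 1,751.1
print(p_value)                         # ~0.014, two-tailed against theta = 1/2
```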

This is a real-world example of Lindley's Paradox.[20] If you throw Bayesian statistics at this experiment's results, this is pretty convincing evidence that precognition does not exist. How convincing depends on your prior beliefs, which the author doesn't know (they're not psychic!), but some math allows us to work around that:

$$\frac{P(\text{no precognition} \mid E)}{P(\text{precognition} \mid E)} = 1751.1 \times \frac{1 - q}{q} \ge 1 \quad\Longrightarrow\quad q \le \frac{1751.1}{1752.1} \approx 99.943\%,$$

where $q$ is your prior credence that precognition exists.

If your prior credence for "precognition exists" was less than 99.943%, this evidence now has you giving equal or more credence to "precognition does not exist" ($1\!:\!1$ being the odds ratio where both hypotheses are equally likely). As "precognition does not exist" is mutually-exclusive to that hypothesis and the two cover all possibilities, the same is true if your prior credence for "precognition does not exist" was greater than 0.057%. Insist on more than a 50/50 split before endorsing a hypothesis? Science is typically content with a 0.05 threshold for rejecting a hypothesis, so if we use that higher bar (an odds ratio of $19\!:\!1$, i.e. $0.95/0.05$) then unless you had more than 98.927% credence in "precognition exists," you're now endorsing "precognition does not exist."
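
Those thresholds fall straight out of the odds ratio form; a minimal sketch:

```python
# A minimal sketch of the prior-credence thresholds quoted above.  With a
# Bayes factor bf favouring "no precognition", the posterior odds against
# precognition are bf * (1 - q) / q, where q is your prior credence in it.
bf = 1751.1

print(bf / (bf + 1))     # ~0.99943: below this prior, posterior odds reach 1:1
print(bf / (bf + 19))    # ~0.98927: below this prior, posterior odds reach 19:1
```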

Conversely, if you throw frequentist statistics at the exact same experiment, with the exact same results, you're on firm ground to reject "precognition does not exist." Two different statistical frameworks have led us to two different conclusions! How can this be?

Experiment 2 from Bem (2011), when we uniformly scale both k and n to simulate the accumulation of evidence. The left-hand side uses Bayesian statistics via Beta conjugate priors, while the right-hand side uses frequentism via two-sided p-values.

The chart at right isolates the two hypotheses we're considering here, and adds a third: the success rate exactly matches the final outcome of the experiment. The vertical axis on the Bayesian side of the chart shows how likely each hypothesis is. All three hypotheses start off with increasing likelihood, but as more and more evidence piles up, the hypotheses that diverge from the observed success rate level off and then decline rapidly. The quantified version of "precognition exists" happens to diverge more than that of "precognition does not exist," and so this process plays out more rapidly for it. Even though quantified "precognition does not exist" is also decreasing in likelihood by this point, the now-giant gap is sufficient to skew the odds ratio heavily in its favour.

The same basic process is happening on the frequentist side: the p-value of quantified "precognition exists" blows past most thresholds for rejection. However, that p-value is never calculated. Hypothesis testing with p-values asks us to form an "evil twin" of the hypothesis we want to prove, what it dubs the null hypothesis. "Precognition exists" implies the success rate for experiment 2 should not be one half, so the null hypothesis is that the success rate is exactly $\theta = \tfrac{1}{2}$. This hypothesis, and this one alone, is the one we test. Being a "nil hypothesis" that we already know is false (see Marginalization), it's no surprise that it crosses the standard threshold for statistical significance as the evidence pile grows. We're thus told to reject "precognition does not exist," even though the chart shows we would have rejected the quantified "precognition exists" far earlier had we instead made it the null!

You may also suspect this is an example of Cherry picking, as it only analyzed the second out of nine experiments. Other experiments Bem published did indeed have outcomes more favourable to precognition ...

Bem (2011), Experiment 5: Negative images
k        n        odds ratio (H1 : H2)    p-value
1,274    2,400    0.55357                 0.002348

... so considering one experiment in isolation may make for an interesting demonstration, but it isn't saying much about precognition. We need to combine multiple experiments. In this scenario, Bayesian statistics only requires us to multiply together every odds ratio. With p-values, the process is not nearly as simple, but still doable.

Some of Bem's experiments cannot be fed directly into a binomial likelihood; however, we can take some creative liberties. Experiments three and four involved timing reactions, which were merged for each test subject in the raw data.[21] A Poisson distribution would be more appropriate than a binomial, but Bem's write-up suggests interpreting subjects with a net retroactive time greater than zero as a "success", so we can extract $k$ and $n$ values from those experiments. Bem uses a more complicated metric for experiments eight and nine, the weighted differential recall score, but we can reinterpret his results from the experimental group such that each practice word recalled counts as a "success" and each control word recalled is a "failure." While the latter isn't really accurate, the quantified "precognition does not exist" hypothesis suggests both types should be recalled with equal frequency, while the other suggests a net advantage for practice words, so we can shoehorn it in.

Bem (2011), as Bernoulli processes
Experiment    k    n    odds ratio (H1 : H2)    p-value
Experiment 1: Erotic images 828 1,560 6.880e-01 0.014028
Experiment 1: Neutral images 238 480 6.092e+00 0.891104
Experiment 1: Negative images 246 480 2.434e+00 0.552981
Experiment 1: Positive images 536 1,080 5.541e+01 0.831337
Experiment 2: Negative images 2,790 5,400 1.751e+03 0.013767
Experiment 3: Retroactive priming (priming > 0) 56 97 5.799e-01 0.103769
Experiment 4: Retroactive priming II (priming > 0) 60 99 4.125e-01 0.026525
Experiment 5: Negative images 1,274 2,400 5.536e-01 0.002348
Experiment 5: Neutral images 1,186 2,400 1.355e+04 0.581550
Experiment 6: Negative images 1,242 2,400 2.191e+01 0.082710
Experiment 6: Erotic images 1,153 2,400 5.962e+05 0.057627
Experiment 7: Boredom, non-stimulus seekers 1,251 2,496 2.646e+03 0.888575
Experiment 7: Boredom, stimulus seekers 1,105 2,304 4.343e+05 0.052661
Experiment 8: Retro recall (successes) 932 1,813 2.104e+01 0.221981
Experiment 9: Retro recall II (successes) 554 1,062 2.355e+00 0.149202
consolidated N/A N/A 1.322e+27 0.000031
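
Consolidating the Bayesian column really is just multiplication; a minimal sketch using the odds ratios from the table above. (The consolidated p-value is another story; Fisher's method is one standard way to combine p-values, though not necessarily what produced the value above.)

```python
# A minimal sketch of consolidating the experiments: with independent data,
# the combined Bayes factor is just the product of the per-experiment odds
# ratios (copied from the table above).  Multiplying in log-space is a good
# habit once Bayes factors start getting astronomically large.
import numpy as np

odds_ratios = [6.880e-01, 6.092e+00, 2.434e+00, 5.541e+01, 1.751e+03,
               5.799e-01, 4.125e-01, 5.536e-01, 1.355e+04, 2.191e+01,
               5.962e+05, 2.646e+03, 4.343e+05, 2.104e+01, 2.355e+00]

consolidated = np.exp(np.log(odds_ratios).sum())
print(f"{consolidated:.3e}")   # ~1.3e+27, in line with the consolidated row
```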

Lindley's Paradox has been taken to ridiculous levels. If you use Bayesian statistics and again demand an odds ratio above 19 before you endorse a hypothesis, then your prior for "precognition exists" must be greater than 99.99999999999999999999999856% before you fail to endorse "precognition does not exist." Never mind needles in haystacks, that's on the scale of finding a solitary neon atom in 1.2 kg of pure carbon! Conversely, if you used p-values you now have a "four-sigma" result, a magnitude rarely found in the social sciences.

Before moving on, we can offer Bem one kindness. After all, Good did suggest that half a deciban, or a success rate of roughly 52.8%, was barely detectable by humans. Out of those thirteen grad students, only two failed to cross that success rate when flipping coins. A rate that low can easily be fudged by dropping a few "outliers" from your dataset, in most cases; nonetheless, it converts experiment 2 from solid evidence against precognition to weak evidence in favour of it. Let's run the numbers and see how "precognition does not exist" fares against this much stronger variant of "precognition exists," which we'll dub $H_3$.

Bem (2011), as Bernoulli processes
Experiment      k      n      odds ratio (H1 : H3)    p-value
consolidated    N/A    N/A    1.312e+04               0.000031
In search of $H_4$, the hypothesis that is most favourable towards precognition existing given the experiments carried out by Bem in his 2011 paper. This chart shows that $H_4$ corresponds to a success rate of roughly 50.8%.

We've gone from concluding precognition is astronomically unlikely to merely highly unlikely. Quite the improvement, but nonetheless not enough. We've only checked out two quantified "precognition exists" hypotheses, though. Computation is cheap, so let's scan over a few hundred alternatives in search of $H_4$: "precognition exists, with a success rate that maximizes our belief in precognition given the experiments in Bem's 2011 paper." This is, obviously, a heavily cherry-picked hypothesis, but it does serve as a useful reference point for how seriously we should take Bem's paper.

The answer, as the chart shows, is "not very." The ideal rate for $H_4$ is somewhere around 50.81%. If you were flipping a coin a hundred times, you could create that much bias in your results by dropping a single toss. At best, $H_4$ shifts your belief by an odds ratio of 6:1 in favour of that hypothesis, relative to $H_1$. If your credence in $H_4$ was above 14.8%, you now favour that hypothesis over $H_1$; if you want an odds ratio above 19 before endorsing, then you fail to endorse $H_4$ if you placed more than 23.4% credence in $H_1$. Bem's 2011 paper is a better refutation of precognition than proof of its existence, but only if you use Bayesian statistics.

Odds of having COVID-19[edit]

It is illegal in most jurisdictions to introduce Bayesian statistics without some question about medical tests. The bog-standard example goes something like this:

You take a COVID-19 rapid antigen test, and find it comes back positive. What are the odds you actually have COVID-19?

It then gathers up the relevant statistics: the probability of a true positive, the probability of a false positive, and the prior probability of having COVID-19. The first two we can find by looking at that slip of folded paper in our COVID-19 test:

Clinical Evaluation of COVID-19 rapid test
Result      COVID-19 Positive (RT-PCR)    COVID-19 Negative (RT-PCR)
Positive    86                            2
Negative    5                             326

Assume that PCR tests for COVID-19 are 100% reliable, despite no test reaching that bar.[22] If $E$ is "a rapid-flow COVID-19 test returns positive," $H$ is "you have COVID-19," and $\lnot H$ is "you do not have COVID-19," then $P(E \mid H) = 86/(86+5) \approx 0.945$ and $P(E \mid \lnot H) = 2/(2+326) \approx 0.0061$. Those are the probability of a true positive and of a false positive, respectively. As for the odds of me having COVID-19, $P(H)$, let's say I live in a province with 5 million people and that only 200 currently have COVID-19. Thus $P(H) = 200/5{,}000{,}000 = 0.00004$ and $P(\lnot H) = 0.99996$.

Plugging those numbers into Bayes' theorem, as per the law, and pausing for dramatic effect...

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)} = \frac{0.945 \times 0.00004}{0.945 \times 0.00004 + 0.0061 \times 0.99996} \approx 0.006$$

... we duly conclude we almost certainly do not have COVID-19. Turning to the camera in shock, we exclaim that the probability of a true positive was about 95%, so how could this be? At which point we repeat the calculation, but this time via the odds ratio form,

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)} \approx 155 \times 0.00004 \approx 0.006,$$

and pull out the base rate fallacy card. The positive test did dramatically increase our belief that we have COVID-19; however, the base rate was so low that it was like dumping a glass of water into a swimming pool. The paradox is resolved, hooray for Bayesian statistics, and blah blah blah blah.
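
A minimal sketch of both calculations, using the numbers above:

```python
# A minimal sketch of the COVID-19 example, in both the plain and odds ratio
# forms of Bayes' Theorem, using the numbers quoted above.
p_pos_given_covid = 86 / (86 + 5)     # true positive rate from the insert
p_pos_given_clear = 2 / (2 + 326)     # false positive rate from the insert
prior = 200 / 5_000_000               # assumed base rate

# Plain form: P(covid | positive test).
posterior = (p_pos_given_covid * prior) / (
    p_pos_given_covid * prior + p_pos_given_clear * (1 - prior))
print(posterior)                      # ~0.006: almost certainly not infected

# Odds ratio form: the Bayes factor times the (tiny) prior odds.
bayes_factor = p_pos_given_covid / p_pos_given_clear
print(bayes_factor * prior / (1 - prior))   # ~0.006 in odds, same story
```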

The bog-standard example is deeply misleading about the real world. In the setup, we implied you took this COVID-19 rapid test on a whim, but few people would think to shove a cotton swab up their nose for funsies. A more realistic setup would instead be:

You wake up with a headache, runny nose, and a bit of a chill after hanging out at the bar last night with friends. You notice your toothpaste tastes different today. You have taken a COVID-19 vaccine, but that was nearly a year ago, and you know that the vaccine only provides strong protection for a few months. You take a COVID-19 rapid antigen test, and find it comes back positive. What are the odds you actually have COVID-19?

The base rate is almost certainly higher than 200 in 5 million in this scenario, because we've already taken quite a few mini-tests before ever reaching for that COVID-19 rapid test: do I have a headache? Do I have a runny nose? These tests do not provide particularly strong evidence; there are a lot of other health issues besides COVID-19 that they could be hinting towards. But, as demonstrated in the precognition example, small shifts in our likelihood can multiply together to make a very large shift. Bayesian statistics pushes us towards considering the totality of the evidence, no matter how weakly any one piece of evidence may shift our opinions.
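
A minimal sketch of that accumulation in odds form; the likelihood ratios below are entirely made up, the point being only that several weak pieces of evidence multiply into a substantial shift before the swab ever comes out:

```python
# A minimal sketch of accumulating weak evidence in odds form.  Every
# likelihood ratio below is a made-up placeholder, not a clinical estimate.
prior_odds = 200 / (5_000_000 - 200)       # same base rate as before, as odds

weak_evidence = {
    "headache":                  2.0,      # e.g. "twice as likely if infected"
    "runny nose":                2.0,
    "altered taste":             8.0,
    "recent crowded indoor bar": 3.0,
    "vaccine protection faded":  1.5,
}

odds = prior_odds
for symptom, likelihood_ratio in weak_evidence.items():
    odds *= likelihood_ratio

# The mini-tests alone raise the odds by a factor of ~144; stack the rapid
# test's Bayes factor (~155, from the sketch above) on top and the posterior
# odds land near 1:1 instead of ~160:1 against.
print(odds, odds * 155)
```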

That's not the only flaw of the legally-mandated presentation, either. Emphasis added by the editor:

Testosterone reference ranges for older men are long established in the urological literature. However, few studies have examined testosterone levels in young men. As a result, clinicians have struggled to counsel and evaluate young men presenting with concerns about testosterone deficiency. Contributing to this struggle is the fact that testosterone levels decline with age, yet we use the same age-independent cutoffs to evaluate young men for testosterone deficiency as we do for older men. In response, we performed the first study evaluating population-based testosterone levels for younger men in the United States. We also used the 2018 American Urological Association guideline for testosterone deficiency definition of a “normal testosterone” as the middle tertile of the population, to provide age-specific cutoffs for low testosterone levels in younger men.[23]

That paragraph has multiple issues, but the most relevant is how the "normal" level of testosterone in human beings is being determined: testosterone levels of the general population are taken, sorted from least to greatest, and then divided into three equal compartments. People who fall into the middle third are considered "normal," while the two-thirds of the population outside that compartment are considered "abnormal." How did they decide that two-thirds of the population have "abnormal" hormone levels?

... no randomized controlled trials have been performed to select the middle tertile as a cutoff value. Additionally, our study is limited by some shortcomings of the National Health and Nutrition Examination Survey database. Namely, the National Health and Nutrition Examination Survey does not specifically query men for hypogonadal signs and symptoms, and only 1 serum testosterone value was obtained from each subject.[23]

This is a terrible medical standard! How on Earth could any doctor take it seriously? This is the dark side of the realistic COVID-19 scenario: no-one tests their endocrine levels on a whim; instead, they only approach a doctor after carrying out a lot of informal and weak tests themselves. That doctor probably isn't an endocrinologist, either, but a general practitioner who assesses the symptoms first before even considering forwarding the patient to an expert, who does their own screening before ever administering that test. Selection bias would make this standard seem to be on solid ground, especially if you had no idea what "abnormal" meant in this context and blindly trusted in the test.

Now consider what happens when you apply this same "iron-clad" standard to a person who shows few-to-none of the symptoms associated with low or high testosterone levels.

World Athletics, the international body governing athletics competitions, approved a new version of its “Eligibility Regulations for the Female Classification” on March 23, 2023. The new rules, which go into force on March 31, require women with higher than typical testosterone, and certain diagnoses of variations in their sex characteristics and hormonal sensitivity to go through medical procedures to reduce their testosterone levels to 2.5 nanomoles/liter for 24 months to be eligible to compete as women in any athletics event. These regulations are not based on any new scientific studies and have no apparent objective basis.[24]

You can wind up having a two-in-three chance of forcing people to take medical treatments they don't need. This is why it is vital to avoid magical thinking about your tests, believing them to be more conclusive than they actually are.

Now, to be fair, the American Urological Association's standards for testosterone aren't the only choice here. The Endocrine Society also splits the general population into three categories, but their middle category contains 95% of the population instead of one third.[25] Declaring a mere five percent of the population to be "abnormal" is a vast improvement over applying the same label to two-thirds. The same type of binary classification is used all throughout science: most notably, results are often declared "statistically significant" if they fall outside a central 95% "normal range" (equivalently, a p-value below 0.05), and "not significant" otherwise.

The “normal range,” even as a classification system, is almost impossible to defend. It classifies 95 per cent of the healthy population as “usual” and 5 per cent as “unusual” for all patients and for all tests, quite without regard to the specific penalty for misclassification in any given application.

The “normal range” arises from putting together two very small facts concerning the healthy population with a large amount of wishful thinking. The result is a generalization, which brings to mind that “All generalizations are false, including this one.” The finished product with its misnomer, “normal range,” carries an implicit invitation to conclude that if a patient’s value falls within the range he is healthy, and that if it lies outside the range he is not healthy and some action is needed. It is also implied that this same range will serve for the banker and the builder, the poet and the football star, the old and the young, the thin female and the obese male, the apparently well and those with serious concurrent disease.[26]

And yet, even the more generous range is problematic. There is almost no quantitative difference between a p-value of 0.049 and 0.051, and yet the qualitative distance is often night-and-day. Likewise, arbitrarily imposing a strict cutoff when none exists in reality is simply begging for trouble. And to be fair to the Endocrine Society, their standards stress they should only be applied to "men with signs and symptoms consistent with testosterone deficiency and unequivocally and consistently low serum concentrations of testosterone measured by a reliable assay."[27] The citation to Elveback above comes from one of their own studies.[25]

Bayesian statistics makes it easy to gradually accumulate evidence from many weak tests. It encourages us to avoid hunting for one "definitive" test, and to include as much background/prior information as we can. But it is easy to miss that if you are handed artificial examples that encourage you to shut up and calculate, rather than think about what and why you are calculating.

Critiques[edit]

  • Subjectivity
  • Need for priors
  • Impractical to use

Connection to rationality[edit]

  • Evidence for Bayesian thinking: socks and Crocs, aka. "that" dress

Usage by cranks[edit]

  • Priors as free parameters
  • Bayesian proofs for god

History[edit]

Inverse probability[edit]

  • Prehistory: Descartes, Bernoulli, de Morgan
  • Richard Price and Rev. Bayes
  • Pierre-Simon Laplace
  • Why "Inverse probability?"
  • Objective vs. Subjective probability

Divergent interpretations[edit]

  • Fisher's rejection of inverse probability
  • Objective branch (Jeffreys, Jaynes, Cox)
  • Subjective branch (Jeffreys, Keynes, de Finetti)

Bayesian renaissance[edit]

  • Savage, Raiffa/Schlaifer
  • Ulam/Metropolis, Gibbs, Neal (HMC)
  • Machine learning, neural nets

See also[edit]


External links[edit]

Our favourite transhumanist's explanation of Bayes' Theorem

References[edit]



  1. Stephen E. Fienberg. “When Did Bayesian Inference Become ‘Bayesian’?,” Bayesian Analysis 1, no. 1 (March 2006): 1–40.
  2. Pierre-Simon Laplace. "Mémoire sur la probabilité de causes par les évenements." Mémoire de l'académie royale des sciences (1774)
  3. Pierre-Simon Laplace. A Philosophical Essay on Probabilities, 1819; trans. Frederick Lincoln Emory and Frederick Wilson Truscott, 1909, pg 6-7.
  4. Jeffreys, Harold. Theory Of Probability, 1948, pg. 15
  5. Jerzy Neyman. "Frequentist probability and frequentist statistics." Synthese 36.1, 1977, pg. 99
  6. Laplace (1819), pg. 15-16
  7. This should always happen, why would you waste time working out the math on impossible hypotheses?!
  8. Steven N. MacEachern. "Nonparametric Bayesian methods: a gentle introduction and overview." Communications for Statistical Applications and Methods 23.6 (2016): 445-466.
  9. Bas J. K. Kleijn and Aad W. van der Vaart. "Misspecification in infinite-dimensional Bayesian statistics." (2006): 837-877.
  10. Jerzy Neyman. "Basic ideas and some recent results of the theory of testing statistical hypotheses." Journal of the Royal statistical society 105.4 (1942): pg. 296.
  11. An amusing side effect is that the cardinality of the set of hypotheses for the coin flip example is strictly greater than the cardinality of the set of real numbers, which should really bake your noodle.
  12. Richard D. Morey, Jan-Willem Romeijn, and Jeffrey N. Rouder, “The Philosophy of Bayes Factors and the Quantification of Statistical Evidence,” Journal of Mathematical Psychology 72 (2016): 6–18.
  13. Daryl J. Bem. "Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect." Journal of personality and social psychology 100.3 (2011)
  14. Sean Carroll. "Physics and the Immortality of the Soul," Preposterous Universe (blog), (2011-05-23)
  15. Homer H. Dubs. "The principle of insufficient reason." Philosophy of Science 9.2 (1942): 123-131.
  16. Jacob Cohen. "The earth is round (p < .05)." American psychologist 49.12 (1994). pg 1000.
  17. Irving J. Good. "Studies in the history of probability and statistics. XXXVII A. M. Turing's statistical work in World War II." Biometrika (1979), pg. 394.
  18. Matthew P. A. Clark and Brian D. Westerberg. "How random is the toss of a coin?" CMAJ 181.12 (2009): E306-E308.
  19. For the most part, Bem used one-tailed p-values in the paper. Slight problem: his null hypothesis was the nil hypothesis $\theta = \tfrac{1}{2}$, which calls for a two-tailed test. The latter are used here, which leads to larger values than in the paper, but this doesn't affect the eventual outcome.
  20. D. V. Lindley. “A Statistical Paradox.” Biometrika, vol. 44, no. 1/2, 1957, pp. 187–92. JSTOR.
  21. Ulrich Schimmack. "My email correspondence with Daryl J. Bem about the data for his “Feeling the Future” article." Replicability-Index (blog), January 20th, 2018.
  22. Beatriz Böger et al., “Systematic Review with Meta-Analysis of the Accuracy of Diagnostic Tests for COVID-19,” American Journal of Infection Control 49, no. 1 (2021): 21–29. https://linkinghub.elsevier.com/retrieve/pii/S0196655320306933
  23. Alex Zhu et al., “What Is a Normal Testosterone Level for Young Men? Rethinking the 300 Ng/DL Cutoff for Testosterone Deficiency in Men 20-44 Years Old,” Journal of Urology 208, no. 6 (2022): 1295–1302.
  24. Human Rights Watch. Sex Testing Rules Harm Women Athletes, news release from March 31st, 2023.
  25. Thomas G. Travison et al., “Harmonized Reference Ranges for Circulating Testosterone Levels in Men of Four Cohort Studies in the United States and Europe,” The Journal of Clinical Endocrinology & Metabolism 102, no. 4 (April 1, 2017): 1161–1173.
  26. Lila Elveback, “The Population of Healthy Persons as a Source of Reference Information,” Human Pathology 4, no. 1 (1973): 9–16.
  27. Ezgi Caliskan Guzelce et al., "Accurate Measurement of Total and Free Testosterone Levels for the Diagnosis of Androgen Disorders," Best Practice & Research Clinical Endocrinology and Metabolism 36, no. 4 (2022): 101683.
  28. Harold Jeffreys. Theory Of Probability, 1948.