Meta-analysis

Meta-analysis is a statistical mashup of results from multiple studies. Alternately, a meta-analysis can be thought of as a focused literature review converted into one or more numbers or hypothesis tests.

Essentially, a review article takes a primarily qualitative approach to summarizing the current scientific knowledge, while a meta-analysis approaches a similar problem through quantitative analysis. The former results in a broader, more detailed examination, while the latter can test a specific hypothesis more rigorously.

Theoretically, a meta-analysis benefits from a larger sample size and, therefore, gives more precise results than any of the donor studies. Like all things statistical, the theoretical benefits of meta-analysis don't always happen. However, when done correctly, a meta-analysis can provide the next best thing to an impractically large single new study.

But, in the end, a meta-analysis is just a fancy form of a weighted mean.
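
For the statistically inclined, here is a minimal sketch of that weighted mean: fixed-effect, inverse-variance pooling. The effect estimates and standard errors are made-up numbers standing in for real study results, not anything from an actual meta-analysis.

```python
# A minimal sketch of the "fancy weighted mean": fixed-effect, inverse-variance pooling.
# Effect estimates and standard errors below are invented, not real study results.

effects = [0.30, 0.12, 0.45]      # effect estimate reported by each donor study
std_errors = [0.15, 0.10, 0.25]   # standard error of each estimate

weights = [1 / se**2 for se in std_errors]  # more precise studies get more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = sum(weights) ** -0.5            # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```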

How it's done

A meta-analysis, done well, follows some version of the following practices:

  1. A very specific definition of the target outcome, subjects (people, crops, mining practices, etc.), and methods.
  2. A thorough literature review to identify all studies that come anywhere close to the criteria above.
  3. Broad inquiries to locate relevant unpublished studies.
  4. Elimination of studies that do not meet a set of pre-defined inclusion criteria for quality standards.
  5. Careful abstraction of study results from the discovered studies.
  6. Conversion of diverse study results into a common measure.
  7. Analysis and publication.
  8. A humble speech at the Nobel Prize ceremony (or, more realistically, enduring the criticism of peers and the public).

Strengths

Overall, the strength of meta-analysis is a strength in numbers gained by gluing together various pieces of science to form one larger, better(?) chunk of science.

Sample size

In many cases, it's impractical to gather enough data in any one study to produce a definitive result. People with uncommon diseases are hard to find and it may take years for enough people to get the disease to even be studied. Some well-known diseases, like type 1 diabetes mellitus, are so thoroughly studied that many people willing to participate in a research project have already been enrolled in more trials than they can stand. A meta-analysis allows for the results from multiple too-small studies to be combined as if they were, jointly, one big just-right study.

Precision

With a larger sample size comes increased precision, both by smoothing out the odd results of several small studies and by reducing the uncertainty of the combined result. Certainly, the precision gained from a meta-analysis does not compare to that of a well-designed large study, but it is better than the precision of any donor study.
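
As a rough sketch (assuming independent donor studies pooled under a fixed-effect, inverse-variance model), the pooled standard error is

$$\mathrm{SE}_{\text{pooled}} \;=\; \left(\sum_i \frac{1}{\mathrm{SE}_i^{2}}\right)^{-1/2} \;\le\; \min_i \mathrm{SE}_i,$$

so the combined estimate is never less precise than the best single donor study, and is usually considerably more precise.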

Credibility

Although a single massive study might be better, it could still suffer from the mistakes or perceived flaws of the single study team. A meta-analysis combines results from multiple flawed teams, each of which hopefully has different flaws. Consistency across the small studies adds credibility to the finding, and the combined, meta-analytic result is the best available assessment of the true effect. In particular, meta-analyses can help overcome biases that arise because individual studies are limited in where they draw their subjects from, producing a more generally applicable result. A meta-analysis can sometimes also resolve conflicting results from different studies, which may be due to chance.
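
One common way to put a number on that consistency is Cochran's Q and the I² statistic. The sketch below uses invented effects and standard errors purely for illustration; a real analysis would report these alongside the pooled result.

```python
# Hedged sketch: quantifying heterogeneity across donor studies with Cochran's Q
# and I^2. Effects and standard errors below are illustrative only.

effects = [0.28, 0.35, 0.22, 0.40]
std_errors = [0.12, 0.15, 0.10, 0.20]

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % of variation beyond chance

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```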

Weaknesses

While all studies, meta or not, are no stronger than their weakest data, meta-analyses are particularly vulnerable since each component study brings its own particular weaknesses, leaving the combination looking more like a Frankenstein monster than like good science. Sometimes a series of individual studies is embarked upon with the explicit intention of conducting a later meta-analysis on them, allowing the experimental design to minimize some of these problems.

Publication bias

Publication bias results in certain sorts of study results being unavailable. The culture of scientific publishing prizes novelty, favoring studies that reject a null hypothesis and show effects in one particular direction.[note 1] A study that shows no difference between treatment and control will often not be published, either rejected by a journal or never submitted. Its results could still be incorporated into a meta-analysis, but only if the meta-analysts actually know the unpublished study exists. The relatively recent drive towards registering each study should reduce this problem over time.
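
A standard screen for publication bias is to check the funnel plot for asymmetry, for example with Egger's regression test. The sketch below uses invented numbers and is illustrative only; a real analysis would use a proper statistics package and report a p-value for the intercept.

```python
# Rough sketch of Egger's regression test for funnel-plot asymmetry.
# All numbers are hypothetical.

effects = [0.40, 0.35, 0.25, 0.15, 0.10]
std_errors = [0.30, 0.22, 0.15, 0.10, 0.08]

# Regress the standardized effect (effect / SE) on precision (1 / SE).
y = [e / se for e, se in zip(effects, std_errors)]
x = [1 / se for se in std_errors]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

# An intercept far from zero hints that small (imprecise) studies report
# systematically different effects, consistent with publication bias.
print(f"Egger intercept = {intercept:.2f}")
```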

Different study designs

Early, smaller studies of a treatment often examine its impact without a control group, as either a one-armed trial or even an observational study. Results from this kind of study are often quite different from those of the randomized controlled trials (RCTs) that come later. Unrandomized trials often show larger differences between groups, differences that may have nothing to do with the treatment actually under examination. Comparing a novel treatment at a major teaching hospital to the placebo at a podunk rural hospital is just not fair. Including those results along with RCT results may not be apples and oranges, but it might be apples and rotten moldy wrinkled apples.[note 2]

Different sources of subjects

Even identically designed studies can draw their subjects from different sources, resulting in differences that have nothing to do with the outcome itself. Stoke Mandeville Hospital, a well-reputed hospital in Aylesbury, England, draws spinal cord injury survivors from a specific geographic region of the UK. By comparison, the equally well-regarded Craig Hospital in the Denver, Colorado suburbs draws local spinal cord injury survivors as well as well-to-do or challenging patients from across the US. Treatment-wise, the two hospitals may generate equivalent quality results, but differences in the respective national health systems and in the representativeness of their patients color the results of otherwise identical trials.
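
When the component studies plausibly estimate somewhat different true effects (different populations, different health systems), meta-analysts often switch from a fixed-effect to a random-effects model. Below is a rough sketch of DerSimonian-Laird random-effects pooling; the numbers are invented for illustration.

```python
# Sketch of DerSimonian-Laird random-effects pooling, which allows for real
# between-study differences instead of assuming every study estimates the
# identical quantity. All numbers are hypothetical.

effects = [0.30, 0.10, 0.50, 0.25]
std_errors = [0.12, 0.15, 0.20, 0.10]

w = [1 / se**2 for se in std_errors]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Between-study variance (tau^2) estimated from Cochran's Q
q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Re-weight with tau^2 added to each study's variance
w_star = [1 / (se**2 + tau2) for se in std_errors]
pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
pooled_se = sum(w_star) ** -0.5

print(f"tau^2 = {tau2:.3f}, random-effects pooled = {pooled:.3f} (SE {pooled_se:.3f})")
```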

Lack of original raw data

Some of the above weaknesses could be addressed if the meta-analysis were conducted using the actual raw data. Study-to-study differences could be controlled for through larger, fancier statistical models. However, these added predictors, even if used only as control variables, start to eat into the advantage of a larger sample size. Furthermore, this is only possible when the control variables are measured across all the component studies. If one donor study enrolls only women, you can be sure that PSA levels[note 3] weren't measured. Even different measurement methods for the same variable can impact results. Blood pressure measured by a machine is more precise, while human-measured BP often shows digit preference (values rounded to the nearest 5 or 10). Alternately, poorly trained staff in one study could place the BP cuffs incorrectly, systematically altering all results for that component study.
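
Digit preference is at least easy to screen for: count how often recorded values land on a multiple of 5. A toy sketch with invented blood pressure readings:

```python
# Quick sketch of a digit-preference check: human-recorded blood pressures often
# cluster on values ending in 0 or 5. The readings below are made up.

readings = [120, 135, 118, 140, 125, 122, 130, 145, 128, 110, 115, 137]

rounded = sum(1 for r in readings if r % 5 == 0)
share = rounded / len(readings)

# With honest measurement to the nearest mmHg, roughly 20% of values should end
# in 0 or 5; a much higher share suggests the observers were rounding.
print(f"{share:.0%} of readings end in 0 or 5")
```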

Selection bias

Selection bias can occur in a meta-analysis when only "good" studies are chosen — perhaps, studies that the researcher just likes better. Studies may be excluded due to professional rivalries or disagreement with study methodology. Such bias in selection naturally affects the meta-analysis results.

Examples

Cardiology

In a meta-analysis of echocardiography, Dorosz compared the ability of two-dimensional and three-dimensional imaging methods to measure volumetric heart outcomes.[1] Medical imaging devices in the US are only required to be assessed for safety and the ability to reasonably measure the required outcome; there are few placebo-controlled trials.[note 4] As a result, there had been little need for a large trial comparing a cheap, possibly imprecise method (the 2D method) with a more expensive gold-standard method that directly measures volume (the 3D method). The respective studies enrolled different mixes of subjects, some older, some younger, some with one disease, others with various diseases. Despite the varying subjects, though, volume is volume. In the end, Dorosz combined results from 22 separate studies of between 29 and 110 subjects to show that 3D is notably more accurate and precise than 2D.

SCI and QOL

Dijkers performed a meta-analysis to investigate quality of life (QOL) outcomes after spinal cord injury (SCI).[2] Since few SCI studies investigated identical QOL outcomes, Dijkers translated all the various outcomes into a more generic effect size.[note 5] The transformed outcomes allowed Dijkers to assess a broader idea of quality of life than any single study measured, with a sample size no single SCI study could easily reach. The subjects in each of the studies differed somewhat: some came from veterans groups, others from specific hospitals, while still others centered around patients of a specific clinical practice. Also, although life satisfaction and happiness may be similar, they are not the same thing, even though the meta-analysis treated them as such. Still, the resulting meta-analysis discovered a modest relationship between injury severity and QOL that was consistent with the general pattern across the studies, but small enough that no single study was large enough to reliably detect it.
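
A common choice for that generic effect size is the standardized mean difference (Cohen's d), the difference in group means expressed in pooled standard deviations, as note 5 describes. The sketch below uses invented group summaries; it is not Dijkers' actual calculation.

```python
# Sketch of converting raw study results into a common effect-size measure
# (Cohen's d, a standardized mean difference). The group summaries are invented.

from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Difference in means expressed in pooled standard deviations."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# e.g. a life-satisfaction score: less severe injury vs. more severe injury
print(f"d = {cohens_d(72.0, 15.0, 40, 66.0, 18.0, 35):.2f}")
```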

MMR vaccine and autism

For more information, see: Andrew Wakefield

A notable meta-analysis helped fuel the vaccine-autism scare. The study's authors claimed to have found a connection between reported adverse events and the thimerosal content of the MMR vaccine.[3] The largest flaw, best described by the American Academy of Pediatrics, is that their source data for autism was a modestly reliable source of adverse events (the numerator) but a completely useless source for those who had no adverse events (the denominator), so an appropriate relative risk couldn't be calculated.[4] Furthermore, the combined dataset didn't allow the meta-analysts to directly compare outcomes of those receiving thimerosal-containing vaccines to the thimerosal-free versions. As if that weren't enough, the authors failed to distinguish between ethylmercury and methylmercury (hint: the 'm' matters).
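
To see why the missing denominator is fatal, consider a toy relative-risk calculation with purely hypothetical counts: a passive adverse-event database supplies only the numerators, and without the totals the ratio simply cannot be computed.

```python
# Why the missing denominator matters: a sketch of a relative-risk calculation
# from a 2x2 table. All counts are hypothetical.

adverse_exposed, total_exposed = 30, 10_000       # exposed group (e.g. thimerosal-containing)
adverse_unexposed, total_unexposed = 25, 10_000   # unexposed group (e.g. thimerosal-free)

rr = (adverse_exposed / total_exposed) / (adverse_unexposed / total_unexposed)
print(f"relative risk = {rr:.2f}")

# A passive adverse-event database gives you only the numerators (30 and 25);
# without the totals, the relative risk above cannot be calculated at all.
```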

Interestingly, author Mark Geier (a physician and geneticist) has now started an experimental practice treating children with autism to supplement his main career as a discredited professional vaccine-trial witness.[5]

See also

Notes

  1. In other words, if an established treatment performs better than the researcher's new treatment, the study results are more likely to go unreported.
  2. Yeah, you try to make a better analogy! I dare you!
  3. Prostate-specific antigen, something that requires a prostate.
  4. No comparisons were made between echocardiography and circus weight guessers.
  5. A number that, in this case, describes how many standard deviations one group is from the reference group.

References
