Mathematical paradoxes

NOTE: This article or section is (sadly) a serious treatment of the subject matter.

Unlike most of RationalWiki, it lacks sarcasm, satire, or humor in general.

This article/section deals with mathematical concepts appropriate for a student in mid to late high school.


Mathematical paradoxes are statements that run counter to one's intuition, sometimes in simple, playful ways, and sometimes in extremely esoteric and profound ways. It should perhaps come as no surprise that a field with as rich a history as mathematics should have many of them. They range from very simple, everyday common-sense issues, to advanced ones at the frontiers of mathematics. That is why this article has so many "expertise level" boxes above.

Simple everyday paradoxes

Perhaps the most well-known paradox is the Liar paradox:

The statement below is true.
The statement above is false.

These are sometimes written on two sides of a card ("The statement on the other side is true." / "The statement on the other side is false.") so that one can watch a person turn the card over and over.

The lesson of this one is clear: The fact that a sentence makes a statement doesn't mean that statement is true. (Of course, that's a good thing to be aware of in general.)

This paradox relates to several others that are actually quite profound, such as the Gödel Incompleteness Theorem. This is because it goes to the heart of what it means for a sentence to "mean" something.

Numeric interpretation paradox

Consider the number described by the phrase

The square root of thirty-six

Phrases can denote numbers clearly and unambiguously, can't they? Now consider this one

The smallest natural number that cannot be described by a phrase shorter than one hundred letters

The number of phrases shorter than 100 letters is finite. In fact, it can't exceed 27^100. (The 27 allows for the placement of spaces between words. Using a larger character set increases the maximum, for example to 34^100 with the 33-letter Russian Cyrillic alphabet plus the space.) So the set of integers that can be described by such phrases must be finite. (It's actually much smaller than 27^100; "Elementary, my dear Watson" does not denote a number.) But there are infinitely many natural numbers (positive integers), so it's not hard to see that there must be a first number not in that set. But that number, whatever it is, has been correctly described by the phrase quoted above, which is shorter than 100 letters!
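The counting step is easy to check for oneself. Here is a minimal Python sketch of the bound (the variable names are ours; the alphabet size and length limit are the ones used above):

    # A minimal sketch of the counting bound used above.  Python integers
    # have arbitrary precision, so the exact numbers pose no problem.
    ALPHABET_SIZE = 27   # 26 letters plus the space
    MAX_LENGTH = 100     # "shorter than one hundred letters"

    # Number of strings of length 0, 1, ..., 99 over a 27-symbol alphabet.
    phrase_bound = sum(ALPHABET_SIZE ** k for k in range(MAX_LENGTH))

    print(phrase_bound < ALPHABET_SIZE ** MAX_LENGTH)  # True: 27^100 is a valid upper bound
    print(len(str(phrase_bound)))                      # fewer than 150 digits: enormous, but finite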

The lessons of this one are

  • Arithmetic, even simple integer arithmetic, can be tricky.
  • What it means for a phrase to "denote" a number is not well defined.

Once again, what it means for something to "mean" something can be very slippery.

Paradoxes of set theory

Mathematicians' attempts to formalize the subject in a really careful way have led to set theory, which can be an extremely advanced subject. But even a simple everyday notion of set theory leads to this paradox:

The town barber, who is a man, shaves exactly every man in the town who does not shave himself.

So does the barber shave himself, or not?
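In fact the contradiction can be checked mechanically. Here is a short sketch in Lean 4 (the names Person and shaves are ours, chosen only for this illustration); it shows that no barber fitting the description can exist:

    -- A relation "shaves" on any type of people cannot admit a barber b
    -- who shaves exactly those people who do not shave themselves.
    theorem no_such_barber {Person : Type} (shaves : Person → Person → Prop) :
        ¬ ∃ b : Person, ∀ x : Person, shaves b x ↔ ¬ shaves x x :=
      fun ⟨b, h⟩ =>
        have hb : shaves b b ↔ ¬ shaves b b := h b
        -- If the barber shaved himself he wouldn't, and vice versa.
        have hn : ¬ shaves b b := fun hs => hb.mp hs hs
        hn (hb.mpr hn)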

The lesson of this is that what it means for something to be in a set is tricky, and what it means for a sentence to make a statement about set membership is tricky. Once again, "meaning" (also known as "semantics") comes into question.

A much more advanced paradox along these lines, perhaps the best known of all paradoxes, is the Russell paradox (Bertrand Russell, 1872–1970). When one formalizes mathematics in an incredibly careful way, as Russell, Whitehead, and others did in the early twentieth century, everything is a set. Yes, every mathematical object is a set. And, for any yes/no question you could ask about things, those things for which the answer is "yes" comprise a set, as do those for which the answer is "no".

If one is given free rein to do anything at all in set theory, a set could contain itself. (Though most sets don't.) The Russell paradox concerns

  • The set of all sets which do not contain themselves.
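Written out in standard set-builder notation (a one-line sketch in LaTeX):

    R = \{\, x \mid x \notin x \,\} \quad \Longrightarrow \quad \left( R \in R \iff R \notin R \right)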

Obviously, there's a problem with this. Does this set contain itself, or not? The lessons are

  • Set theory is really very complicated.
  • We can't allow set theorists free rein to do just anything.

Mathematicians have created the Zermelo-Fraenkel axioms to formalize set theory so that these problems won't arise. In particular, Russell's paradox doesn't arise in Zermelo-Fraenkel set theory because in it one cannot define "the set of all sets" or "the set of all sets satisfying property P" (one can speak of the class of all sets, the Von Neumann universe V, but it is not itself a set). Also, by the axiom of regularity, no set can contain itself as an element.

Paradoxes of logic

Mathematicians of the early twentieth century attempted to formalize logic itself very rigorously, that is, what it means to "prove" a "theorem". Given how hard it is to ascribe "meaning" to "meaning", it isn't surprising that an amazing problem arose: the Gödel Incompleteness Theorem.

First, mathematicians had to establish in a very rigorous and profound way what it means for a proposition to be "true" or "false", and what it means for something to be a "valid proof" of such a thing. Once that was done, the "Holy Grail" of logic came into view: can we say, for every proposition whose truth or falsity is well defined, that either

  • It is true, and there is a proof that it is true.

or

  • It is false, and there is a proof that it is false?

The search for this "Holy Grail" is not new. In the 1600s, Gottfried Wilhelm Leibniz expressed a desire to be able to answer any question (mathematical or otherwise!) by logical analysis, deduction, and calculation.

It was not to be. The Gödel Incompleteness Theorem says that no consistent system of logical axioms and rules that is rich enough to express ordinary arithmetic, and whose axioms can be listed by a mechanical procedure, can supply, for every well-formed statement, either a proof that it is true or a proof that it is false; some statements must remain undecidable within the system.

This theorem is related to a theorem of computer science: even under an idealized formulation of what a computer can do, no program can examine every other program and say with certainty whether that program will halt or run forever. This is known as the "halting problem".
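The flavour of the argument can be sketched in a few lines of Python. Suppose, purely hypothetically, that someone handed us a function halts(f) that always answered correctly; the made-up construction below shows why no such function can exist:

    # Hypothetical premise: suppose halts(f) returned True exactly when
    # calling f() would eventually finish, and False when it would run
    # forever.  (No such function can be written; that is the point.)
    def make_contrary(halts):
        def contrary():
            if halts(contrary):
                while True:        # predicted to halt, so run forever instead
                    pass
            # predicted to run forever, so simply return (i.e. halt)
        return contrary

    # Whatever halts(contrary) answers, contrary() does the opposite,
    # so halts cannot be both total and correct.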

Paradoxes of analysis and measure theory

When one has studied advanced analysis (that is, calculus), topology, and, especially, measure theory, one can see many paradoxical things. Some of these involve the Cantor set and Cantor function. The Cantor set (briefly defined as those real numbers between 0 and 1 whose representation in base 3 contains only 0's and 2's) is an uncountable set of measure zero. While not paradoxical in itself, it is a rather unexpected result. The Cantor function is a nondecreasing function such that f(0) = 0, f(1) = 1, f is continuous everywhere, and f is differentiable, with derivative zero, "almost everywhere", that is, except on a set of measure zero. This would violate the Mean Value Theorem if that theorem actually applied.
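To make the Cantor function a little less mysterious, here is a small Python sketch (our own illustration, not a library routine) that follows the standard digit recipe: read the base-3 digits of x until the first 1, turn the 2's into 1's, and read the result as a binary fraction.

    def cantor_function(x, digits=40):
        """Approximate the Cantor ("devil's staircase") function on [0, 1]."""
        if x >= 1.0:
            return 1.0
        value, power = 0.0, 0.5
        for _ in range(digits):
            x *= 3
            d = int(x)                 # next base-3 digit of the original x
            x -= d
            if d == 1:                 # a 1 means x lies in a removed middle third
                return value + power
            value += power * (d // 2)  # ternary digit 0 -> bit 0, digit 2 -> bit 1
            power /= 2
        return value

    print(cantor_function(0.0))    # 0.0
    print(cantor_function(1.0))    # 1.0
    print(cantor_function(1 / 3))  # approximately 0.5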

The lesson here is that one's intuition about what things should look like can be very misleading at advanced levels.

Measure theory provides some interesting paradoxes. By the time one has studied advanced calculus and topology, one has seen many very strange sets. Measure theory attempts to assign a "geometrical size" to sets. Of course, we want the measure of a simple set to match our intuition: in one dimension the interval [-4, 5] should have a measure of 9. In two dimensions, the set of all points with x^2 + y^2 ≤ 1 is the unit disc and should have measure π, and so on. We want the measure of the union of non-overlapping sets to be the sum of those sets' measures, and so on. Lebesgue measure (rhymes with vague; the s is silent) is the accepted theory of measure. It is compatible with common intuition for sets that fall under common intuition. But not all sets are Lebesgue-measurable.

In addition to the seemingly paradoxical existence of non-Lebesgue-measurable sets, there are some more serious paradoxes. Perhaps the most famous is the Banach-Tarski paradox. This says that a solid ball in 3 dimensions can be divided into a finite number of pieces (five suffice) which, moved rigidly without changing their shapes or sizes, can be reassembled into two balls, each identical to the original one. This of course violates everything we thought we knew about what "measure" is supposed to mean. The way the paradox gets away with this trick is that the pieces are not measurable.

The lesson here is that, when dealing with complicated sets such as may arise in advanced analysis, geometrical intuition can fail badly. And that non-measurable sets (a price one must pay for advanced analysis and topology) are particularly suspect.

The axiom of choice

The axiom of choice (AC) is fairly easy to state at a simple level, but it has a precise meaning that must be understood to grasp what is really being claimed. Informally, it says that given any collection of non-empty sets, you can always choose an element from each of them, with no need to specify which element you're choosing or how. So long as the collection contains only finitely many sets, there is no controversy about this. However, for infinite collections there are cases where you genuinely cannot state how you will make the necessary choices.
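Stated a bit more precisely (a standard formulation, written here in LaTeX notation):

    \text{For every family } (S_i)_{i \in I} \text{ of non-empty sets there exists a function }
    f : I \to \bigcup_{i \in I} S_i \text{ with } f(i) \in S_i \text{ for every } i \in I.

Such an f is called a choice function; AC asserts only that one exists, not that there is any rule for writing it down.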

For a long time it was an open question whether the axiom of choice could be proved from the uncontroversial axioms of Zermelo-Fraenkel (ZF) set theory. By 1963 it had been shown to be independent of ZF (it can neither be proved nor disproved from those axioms), so it must be adopted as an extra axiom if one wants to use it. The axiom system of ZF with the additional assumption of the axiom of choice is called ZFC.

Constructivist mathematicians (and some others) generally prefer proofs that do not require AC, on the grounds that such proofs are more "clean". However, some proofs are significantly cleaner when AC is assumed, and it provides elegance and generality for numerous theorems. The remarkable thing about the axiom of choice is that both accepting it and rejecting it lead to things that seem like paradoxes. Accepting AC leads to the Banach-Tarski paradox in measure theory. Rejecting it breaks the notion of size in a different way: without AC it isn't possible to prove that if two sets don't have the same number of elements, then one of them must have fewer elements than the other.

In practice, it doesn't matter much. No use of AC leads to results that are "wrong" in any serious sense. Rather, mathematicians find themselves stuck on the prongs of a Morton's Fork: even when we accept only sensible-looking assumptions, we may find conclusions that seem strange, however annoying or mystifying that may be.

See also