Proof rests on a surprising link between the sizes of infinite sets and the complexity of mathematical theories
From Quanta Magazine (find the original story here).
In a breakthrough that disproves decades of conventional wisdom, two mathematicians have shown that two different variants of infinity are actually the same size. The advance touches on one of the most famous and intractable problems in mathematics: whether there exist infinities between the infinite size of the natural numbers and the larger infinite size of the real numbers.
The problem was first identified over a century ago. At the time, mathematicians knew that “the real numbers are bigger than the natural numbers, but not how much bigger. Is it the next biggest size, or is there a size in between?” said Maryanthe Malliaris of the University of Chicago, co-author of the new work along with Saharon Shelah of the Hebrew University of Jerusalem and Rutgers University.
In their new work, Malliaris and Shelah resolve a related 70-year-old question about whether one infinity (call it p) is smaller than another infinity (call it t). They proved the two are in fact equal, much to the surprise of mathematicians.
“It was certainly my opinion, and the general opinion, that p should be less than t,” Shelah said.
Malliaris and Shelah published their proof last year in the Journal of the American Mathematical Society and were honored this past July with one of the top prizes in the field of set theory. But their work has ramifications far beyond the specific question of how those two infinities are related. It opens an unexpected link between the sizes of infinite sets and a parallel effort to map the complexity of mathematical theories.
The notion of infinity is mind-bending. But the idea that there can be different sizes of infinity? That’s perhaps the most counterintuitive mathematical discovery ever made. It emerges, however, from a matching game even kids could understand.
Suppose you have two groups of objects, or two “sets,” as mathematicians would call them: a set of cars and a set of drivers. If there is exactly one driver for each car, with no empty cars and no drivers left behind, then you know that the number of cars equals the number of drivers (even if you don’t know what that number is).
In the late 19th century, the German mathematician Georg Cantor captured the spirit of this matching strategy in the formal language of mathematics. He proved that two sets have the same size, or “cardinality,” when they can be put into one-to-one correspondence with each other—when there is exactly one driver for every car. Perhaps more surprisingly, he showed that this approach works for infinitely large sets as well.
Consider the natural numbers: 1, 2, 3 and so on. The set of natural numbers is infinite. But what about the set of just the even numbers, or just the prime numbers? Each of these sets would at first seem to be a smaller subset of the natural numbers. And indeed, over any finite stretch of the number line, there are about half as many even numbers as natural numbers, and still fewer primes.
Yet infinite sets behave differently. Cantor showed that the natural numbers can be put into one-to-one correspondence with each of these seemingly smaller sets: pair each natural number n with the even number 2n, or with the nth prime, and nothing is left over on either side.
Because of this, Cantor concluded that all three sets are the same size. Mathematicians call sets of this size “countable,” because you can assign one counting number to each element in each set.
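The pairing just described can be made concrete. Below is a small illustrative Python sketch (not part of the original article; the function name nth_prime is just a label for this example) that matches each natural number n with the even number 2n and with the nth prime. Distinct inputs get distinct partners and no even number or prime is ever skipped, which is exactly what "same cardinality" means.

```python
def nth_prime(n):
    """Return the n-th prime (n = 1 gives 2), by trial division -- fine for small n."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

# Pair each natural number n with an even number and with a prime.
# Each pairing is one-to-one: different n's get different partners,
# and every even number (resp. every prime) is hit exactly once.
for n in range(1, 11):
    print(f"{n} <-> {2 * n}    {n} <-> {nth_prime(n)}")
```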
After he established that the sizes of infinite sets can be compared by putting them into one-to-one correspondence with each other, Cantor made an even bigger leap: He proved that some infinite sets are even larger than the set of natural numbers.
Consider the real numbers, which are all the points on the number line. The real numbers are sometimes referred to as the “continuum,” reflecting their continuous nature: There’s no space between one real number and the next. Cantor was able to show that the real numbers can’t be put into a one-to-one correspondence with the natural numbers: Even after you create an infinite list pairing natural numbers with real numbers, it’s always possible to come up with another real number that’s not on your list. Because of this, he concluded that the set of real numbers is larger than the set of natural numbers. Thus, a second kind of infinity was born: the uncountably infinite.
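Cantor's "come up with another real number that's not on your list" step is known as the diagonal argument, and it can be sketched in a few lines of code. The snippet below is an illustration by this write-up, not the article's: given any proposed list of reals in [0, 1], described by a function returning their decimal digits, it builds a number whose nth digit differs from the nth digit of the nth listed number, so the new number cannot equal any entry on the list.

```python
def diagonal_counterexample(digits_of, n_digits=10):
    """digits_of(i, j) -> j-th decimal digit of the i-th listed real in [0, 1].
    Build the first n_digits digits of a real that differs from every listed
    number: its i-th digit is chosen to disagree with digits_of(i, i)."""
    new_digits = []
    for i in range(n_digits):
        d = digits_of(i, i)
        # Pick 5 or 6 so the digit differs from d (and avoids 0.999... = 1.0 issues).
        new_digits.append(5 if d != 5 else 6)
    return "0." + "".join(map(str, new_digits))

# Example "list": the i-th number has the digit (i + 1) mod 10 repeated forever.
print(diagonal_counterexample(lambda i, j: (i + 1) % 10))
```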
What Cantor couldn’t figure out was whether there exists an intermediate size of infinity—something between the size of the countable natural numbers and the uncountable real numbers. He guessed not, a conjecture now known as the continuum hypothesis.
In 1900, the German mathematician David Hilbert made a list of 23 of the most important problems in mathematics. He put the continuum hypothesis at the top. “It seemed like an obviously urgent question to answer,” Malliaris said.
In the century since, the question has proved itself to be almost uniquely resistant to mathematicians’ best efforts. Do in-between infinities exist? We may never know.
Throughout the first half of the 20th century, mathematicians tried to resolve the continuum hypothesis by studying various infinite sets that appeared in many areas of mathematics. They hoped that by comparing these infinities, they might start to understand the possibly non-empty space between the size of the natural numbers and the size of the real numbers.
Many of the comparisons proved to be hard to draw. In the 1960s, the mathematician Paul Cohen explained why. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics—that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)
Cohen’s work won him the Fields Medal (one of math’s highest honors) in 1966. Mathematicians subsequently used forcing to resolve many of the comparisons between infinities that had been posed over the previous half-century, showing that these too could not be answered within the framework of set theory. (Specifically, Zermelo-Fraenkel set theory plus the axiom of choice.)
Some problems remained, though, including a question from the 1940s about whether p is equal to t. Both p and t are orders of infinity that quantify the minimum size of collections of subsets of the natural numbers in precise (and seemingly unique) ways.
The details of the two sizes don’t much matter. What’s more important is that mathematicians quickly figured out two things about p and t. First, both are larger than the size of the natural numbers. Second, p is always less than or equal to t. Therefore, if p is less than t, then p would be an intermediate infinity: something between the size of the natural numbers and the size of the real numbers. The continuum hypothesis would be false.
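In symbols, and adding the standard fact (not spelled out above) that both p and t are at most the size of the continuum, the situation can be summarized as

$$\aleph_0 < p \le t \le 2^{\aleph_0}, \qquad \text{so} \qquad p < t \;\Longrightarrow\; \aleph_0 < p < 2^{\aleph_0},$$

that is, p would be a size strictly between the natural numbers and the real numbers, contradicting the continuum hypothesis. Malliaris and Shelah's theorem rules this route out by showing p = t.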
Mathematicians tended to assume that the relationship between p and t couldn’t be proved within the framework of set theory, but they couldn’t establish the independence of the problem either. The relationship between p and t remained in this undetermined state for decades. When Malliaris and Shelah found a way to solve it, it was only because they were looking for something else.
Around the same time that Paul Cohen was forcing the continuum hypothesis beyond the reach of mathematics, a very different line of work was getting under way in the field of model theory.
For a model theorist, a “theory” is the set of axioms, or rules, that define an area of mathematics. You can think of model theory as a way to classify mathematical theories—an exploration of the source code of mathematics. “I think the reason people are interested in classifying theories is they want to understand what is really causing certain things to happen in very different areas of mathematics,” said H. Jerome Keisler of the University of Wisconsin, Madison.