Mathematician Georg Cantor introduced the concept of an infinite set. He proceeded to (supposedly) prove there are more real numbers than natural numbers.
But with infinity, we can effectively prove whatever we want. And so, for a bit of fun, let’s assume we understand what infinity is and that infinity does exist. Now we can prove the exact opposite of Cantor’s result.
Consider feeding the number Pi (=3.14159…) into a special function that converts real numbers into natural numbers. The algorithm of our special function proceeds as follows.
If the ‘real’ number is negative, our result will start with a 3; otherwise it will start with a 2. Next comes a string of 1’s whose length equals the integer part of the real plus one. For Pi, this means we will have 3 + 1 = four 1’s. Next comes a string of 0’s whose length equals the fractional part of the real read to N decimal places. Thus to represent Pi to two decimal places (.14) we need a string of fourteen zeros, giving the natural number 2111100000000000000.
Thus for any value of N, our function will produce a single natural number that uniquely represents that ‘real’ to N decimal places.
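The encoding described above can be sketched in Python. (This is a small illustration of the scheme, not part of the original argument; the hypothetical `encode_real` helper takes the real as a decimal string to sidestep floating-point rounding.)

```python
def encode_real(s: str, n: int) -> int:
    """Encode a decimal string like '3.14159' (or '-2.5') as a natural
    number built only from the digits 0-3, to n decimal places."""
    neg = s.startswith('-')
    if neg:
        s = s[1:]
    int_part, _, frac_part = s.partition('.')
    frac_part = (frac_part + '0' * n)[:n]       # pad/truncate to n places
    lead = '3' if neg else '2'                  # sign marker digit
    ones = '1' * (int(int_part) + 1)            # integer part + 1 ones
    zeros = '0' * int(frac_part or '0')         # fractional value as a zero-count
    return int(lead + ones + zeros)

print(encode_real('3.14159', 2))  # 2111100000000000000
```

For a fixed N the output is unique per real, but note how quickly the zero-count grows: at N decimal places the encoding needs up to 10^N − 1 zeros.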
This logic is true for all values of N. Therefore, if we allow N to increase to infinity, it follows that our function will be able to produce a single natural number representing that real to infinitely many decimal places.
This shows that all reals can be represented by natural numbers that only contain digits of 3 and below. And since we know that natural numbers exist with higher digits in them, clearly there must be more natural numbers than reals!
The usual objection to this logic is that natural numbers cannot contain ‘infinitely many’ digits. But any set of unique (non-repeating) natural numbers, excluding zero, must contain at least one number that is equal to or greater than the size of the set. Why does (and how can) this rule suddenly not hold when the set contains ‘infinitely many’ elements?
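The finite version of that rule is easy to verify exhaustively. Here is a quick Python check (a small illustration added here, not part of the original argument) over every non-empty subset of {1, …, 10}:

```python
from itertools import combinations

def contains_element_at_least_size(s):
    """Rule: a set of distinct natural numbers (excluding 0) contains
    at least one element >= the size of the set."""
    return max(s) >= len(s)

# Exhaustively verify the rule for every non-empty subset of {1..10}:
# the smallest possible maximum of r distinct positives is r itself.
universe = range(1, 11)
ok = all(contains_element_at_least_size(subset)
         for r in range(1, 11)
         for subset in combinations(universe, r))
print(ok)  # True
```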
It seems we can pick and choose which fundamental rules of mathematics suddenly no longer apply where infinity is involved, as long as our choices support the idea that infinity is a valid concept.
All this is, of course, complete nonsense. The first mistake is the assumption that we understand infinity, when we have no tangible, logically rigorous definition of it. In short, we don’t know what we are talking about!
Any proof or argument that involves infinity is inherently flawed because there is no sound mathematical definition for the concept of infinity.
See ‘Investigation of infinity in mathematics’ for more about infinity.