Much is made in popular mathematics writing of the human impulse to contemplate infinity, and even more is made of how counterintuitive the infinite can be. Hilbert’s Hotel, the fact that there are as many counting numbers as there are fractions, and so forth. But you don’t have to go all the way to infinity to get confused; math is confusing enough “near” infinity, i.e. at really big numbers. Consider this quotation from the distinguished mathematician Ronald Graham.

The trouble with integers is that we have examined only the very small ones. Maybe all the exciting stuff happens at really big numbers, ones we can’t even begin to think about in any very definite way. Our brains have evolved to get us out of the rain, find where the berries are, and keep us from getting killed. Our brains did not evolve to help us grasp really large numbers or to look at things in a hundred thousand dimensions.

I have heard it said (though I don’t remember right now who said it) that humans intuitively perceive numbers much as a person standing in a large meadow perceives distance markers placed, say, at 1-foot intervals. We see 2 as significantly more than 1, and 10 is a lot more than that. But it’s hard to compare a million and a billion; they’re both essentially on the horizon. Indeed, in some ways 3 and 10 can feel further apart than, say, a billion and a trillion. It’s something like asking a small child whether two stars in the sky are closer together or further apart than, say, her house and her school. The intuition that comes standard on people is a local thing. And almost all numbers, like almost all places, are really far away.

Here’s an interesting construction that is almost impossible to believe at first, because all the interesting stuff is happening way far out down the number line.

Begin with any positive integer, which we call $x_1$. Write this number in “hyperbase” 2. (Don’t be surprised if you’ve never heard of hyperbase representations; I’ve heard of them only in this specific context, and that was only after I got my Ph.D.) That is, write the number as a sum of powers of 2 with coefficients less than 2; then repeat the process with the exponents, writing them in base 2, etc. In hyperbase 2, we’d write $1729 = 2^{2^{2+1}+2} + 2^{2^{2+1}+1} + 2^{2^2+2+1} + 2^{2^2+2} + 1$. In hyperbase 3, 1729 looks like $2\cdot 3^{2\cdot 3} + 3^{3+2} + 3^3 + 1$.
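To make this concrete, here is a small Python sketch that renders a number’s hyperbase-$b$ representation as a string (the function name `hyperbase` is my own, since the term isn’t standard):

```python
def hyperbase(n, b):
    """Render n in "hyperbase" b: write n as a sum of powers of b with
    coefficients below b, then recursively rewrite each exponent the same way."""
    if n < b:
        return str(n)
    terms, e = [], 0
    while n:
        n, d = divmod(n, b)
        if d:
            terms.append((e, d))
        e += 1
    parts = []
    for e, d in reversed(terms):          # most significant term first
        coeff = "" if d == 1 else f"{d}*"
        if e == 0:
            parts.append(str(d))
        elif e == 1:
            parts.append(f"{coeff}{b}")
        else:
            parts.append(f"{coeff}{b}^({hyperbase(e, b)})")
    return " + ".join(parts)

print(hyperbase(7, 2))    # 2^(2) + 2 + 1
print(hyperbase(30, 3))   # 3^(3) + 3
```

Feeding it 1729 reproduces the two representations above, just in ASCII notation.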

As long as $x_k$ is a positive integer, obtain $x_{k+1}$ by

- writing $x_k$ in hyperbase $k+1$
- replacing each occurrence of $k+1$ by $k+2$ to get a number in hyperbase $k+2$
- subtracting 1

If ever $x_k = 0$, we stop.
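The three-step rule is easy to mechanize. Here’s a minimal Python sketch (the names `change_base` and `goodstein_step` are my own):

```python
def change_base(n, b, c):
    """Value of n after writing it in hyperbase b and replacing every b by c.
    The recursion applies the replacement inside the exponents as well."""
    if n < b:
        return n
    total, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            total += d * c ** change_base(e, b, c)
        e += 1
    return total

def goodstein_step(x, k):
    """Given x_k, produce x_{k+1}: bump hyperbase k+1 to k+2, then subtract 1."""
    return change_base(x, k + 1, k + 2) - 1

x, k, seq = 7, 1, [7]
while k <= 5:
    x = goodstein_step(x, k)
    seq.append(x)
    k += 1
print(seq)   # [7, 30, 259, 3127, 46657, 823543]
```

Running the same loop from 2 instead of 7 gives 2, 2, 1, 0 and stops.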

If I start with a very small number, say 2, then nothing all that interesting happens; the numbers collapse to zero rather rapidly.

- $x_1 = 2$; we get the sequence $2, 2, 1, 0$.

It’s another story if we start with a larger number, say 7.

- $x_1 = 7 = 2^2 + 2 + 1$, so $x_2 = 3^3 + 3 + 1 - 1 = 3^3 + 3 = 30$.
- $x_2 = 30 = 3^3 + 3$, so $x_3 = 4^4 + 4 - 1 = 4^4 + 3 = 259$.
- $x_3 = 259 = 4^4 + 3$, so $x_4 = 5^5 + 3 - 1 = 5^5 + 2 = 3127$.

As you can see, this is getting out of hand. It’s “obvious” that this sequence, beginning $7, 30, 259, 3127, \ldots$, explodes to infinity.

But like so many obvious statements, the statement I just made is wrong. The truth is that, no matter how large a number you start with, the sequence will terminate at 0. (!!!)

Yes, really. But I don’t advise you to work it out by hand. It takes more steps than any reasonable person would do, even by computer if doing each step requires a separate click or keystroke. In fact, within the bounds of “steps anyone would ever actually do”, the sequence is indeed growing and growing rapidly.

There’s a major qualitative difference between the first billion terms and the genuinely long-term behavior. It’s like one of those pictures where they show you an extreme close-up of a tiny tiny piece of an object, and it looks one way, and then they show you the big picture, and it’s very different. Except, in that story, the big picture is a familiar object; with numbers, the distorted close-up view is all that’s familiar.

Unfortunately, it’s a little bit outside the scope of this blog to actually prove that remarkable claim I made, that the sequence always terminates at 0. You can get the important part of the intuition for what is going on by thinking about a weaker statement for ordinary number bases.

- Start with any number and write it in base 2, e.g. starting with 12 we get $1100$
- Interpret those digits as if they were in base 3, then subtract 1 (leaving the result in base 3), e.g. $1100$ read in base 3 is $36$, and $36 - 1 = 35 = 1022$ in base 3
- Interpret those digits as if they were in base 4, then subtract 1 (leaving the result in base 4), e.g. $1022$ read in base 4 is $74$, and $74 - 1 = 73 = 1021$ in base 4
- Interpret those digits as if they were in base 5, then subtract 1 (leaving the result in base 5), e.g. $1021$ read in base 5 is $136$, and $136 - 1 = 135 = 1020$ in base 5
- etc.
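The steps above can be simulated directly. A Python sketch (this base-change game is, I believe, what’s sometimes called a weak Goodstein sequence; the function name is mine). Small starting values finish quickly, but the run lengths blow up ferociously as the start grows, so don’t feed it anything big:

```python
def weak_goodstein(n):
    """Reinterpret n's base-b digits in base b+1, subtract 1, and repeat
    (b = 2, 3, 4, ...); return the whole sequence of values, ending at 0."""
    seq, b = [n], 2
    while n > 0:
        ds = []                        # digits of n in base b, least significant first
        while n:
            n, d = divmod(n, b)
            ds.append(d)
        n = sum(d * (b + 1) ** i for i, d in enumerate(ds)) - 1  # reread in base b+1, minus 1
        seq.append(n)
        b += 1
    return seq

print(weak_goodstein(3))   # [3, 3, 3, 2, 1, 0]
```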

The numbers get bigger in absolute terms, but the representations don’t get any longer. If we sort the digit strings like we alphabetize words (comparing from left to right, and padding with leading zeros so that all the strings have the same length), in what’s called lexicographical order, the representations get smaller and smaller. And since the lexicographical order is a well-order, this can’t go on forever, and eventually we reach 0. (Notice that even though the process can increase any individual digit by an arbitrarily large amount (when we have to borrow), this is always accompanied by a decrease in a more significant position.)
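You can watch the descent happen. The sketch below runs the process (the starting value 12 is an arbitrary choice of mine) and checks that the left-padded digit string strictly shrinks, lexicographically, at every step:

```python
def base_digits(n, b):
    """Digits of n in base b, most significant first."""
    ds = []
    while n:
        n, d = divmod(n, b)
        ds.append(d)
    return ds[::-1]

# run the base-change process from 12 and record each digit string,
# left-padded to a fixed width (the representations never get longer)
words, n, b = [], 12, 2
width = len(base_digits(12, 2))       # 12 = 1100 in base 2, so width 4
for _ in range(40):
    ds = base_digits(n, b)
    words.append(tuple([0] * (width - len(ds)) + ds))
    n = sum(d * (b + 1) ** i for i, d in enumerate(reversed(ds))) - 1
    b += 1

# strictly decreasing in lexicographic order, step after step
assert all(later < earlier for earlier, later in zip(words, words[1:]))
```

Python compares tuples lexicographically out of the box, which is exactly the word-alphabetizing order described above.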

The proof of the result involving hyperbase representations is similar; you just have to encode the various digits in all the nested exponents. Done correctly, this is still well-ordered for essentially the same reason.
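For a taste of how the encoding goes: replace every occurrence of the hyperbase by the ordinal $\omega$. For the sequence starting at 7, the first few terms map like this (a sketch, not the full proof):

```latex
x_1 = 7    = 2^2 + 2 + 1 \;\longmapsto\; \omega^{\omega} + \omega + 1
x_2 = 30   = 3^3 + 3     \;\longmapsto\; \omega^{\omega} + \omega
x_3 = 259  = 4^4 + 3     \;\longmapsto\; \omega^{\omega} + 3
x_4 = 3127 = 5^5 + 2     \;\longmapsto\; \omega^{\omega} + 2
```

Bumping the hyperbase doesn’t change the ordinal at all, since every base gets sent to the same $\omega$, while the subtraction of 1 strictly decreases it; and a strictly decreasing sequence of ordinals below $\varepsilon_0$ must be finite.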

Tweedcap,

I don’t know why I liked this posting so much – but I do.

Good stuff,

Bill

This is quite interesting to me in regards to computing applications. This seems like it could potentially be useful for storing large integers in computers, particularly if the “power” was a power of two.

In your example of x_k where you started with the number seven, for what value of k does the sequence reach a maximum? Is there a way to calculate the x_k with the largest magnitude for an arbitrary starting point? Also, can you write fractions or non-integer rational numbers in the same representation (possibly using negative powers)?

Something else that caught my attention (unrelated) was the number 1729, which if I can remember correctly is the smallest number representable as the sum of two cubes two different ways.