Counting on Monsters

28 September 2010

While we’re on the subject of books, here’s a book for the smaller mathematicians in your life.  The ones who can’t necessarily spell “mathematician”.

The book is You Can Count on Monsters by Richard Evan Schwartz (ISBN 1568815786), and it’s a picture book about prime number decompositions.  It’s much more colorful than you’re picturing, I promise.

One thing that makes this book so nice is that it doesn’t beat you over the head with anything.  There are some very basic remarks about primes and multiplication at the beginning  and some slightly deeper remarks at the very end, but the vast bulk of the pages have no text at all.

Each number from 1 to 100 gets a double page.  On the left is the number and a configuration of that many dots (usually clustered into spirals or some such in an interesting way).  On the right is a whimsical (and strangely compelling) drawing of a monster.  For a prime number, the monster is some simple monster which somehow embodies the nature of the number (the right number of teeth, or legs, or whatever).  For a composite number, the monster is in some way a conglomeration of the corresponding prime monsters.  (So the 70-monster incorporates the natures of the 2-, 5-, and 7-monsters.)

There’s plenty to stare at, plenty of patterns sitting right near the surface, and plenty more lurking underneath to be discovered over time.  An artistically-inclined child might try to invent prime monsters larger than 100, or to draw some composite monster.  (I’ll confess that when I bought the book, I entertained myself for quite some time trying to draw the 1001 monster.)  It’s just fun to look at, and it provides lots of interesting avenues for mathematical conversation with “grown-up” mathematicians.

Speaking as a number theorist, I love the way this book conveys the essential predictable/chaotic dual nature of prime numbers.  There are always more monsters, but you’re never quite sure when you’ll meet the next one.


“The Impossible” and “Infinity”: Two Outstanding Books on Math

28 September 2010

When I was at MathFest in Pittsburgh this summer, I bought a pile of math books.  It’s taken a lot of bus rides to get through them all, but now that I’ve read them all, there are several that I want to recommend.

My favorites from the bunch are two books on mathematics for a general audience by John Stillwell.  Stillwell is best known, I believe, for his excellent tome Mathematics and its History, which is an outstanding textbook for a course (or two or three) in the history of mathematics, expertly blending mathematical content, biographical information, and insight into the historical progression.  I can’t say enough good things about that text — it’s one of my favorite books — but it’s not something that a nonmathematician is likely to buy (because it’s so big and correspondingly expensive).

These two books, though, are short, affordable, accessible, and beautifully written.  Each tells a story of mathematicians dealing with a certain big theme, with a logical progression of increasingly sophisticated and deep mathematics.

The first is Yearning for the Impossible: The Surprising Truths of Mathematics (ISBN 156881254X).  Here the theme is the ongoing evolution of mathematics in response to questioning the impossibility of certain ideas.  There is no real solution to x^2=-1, so we could just throw up our hands and say “It’s impossible!”, but the results are much more interesting if we ask “Is there some other sense in which it is possible?”  This book, just over 200 pages, is really remarkable for the number of disparate and sophisticated ideas it manages to introduce to a general audience.  Let me illustrate this by simply listing the chapter headings.

  1. The Irrational
  2. The Imaginary
  3. The Horizon
  4. The Infinitesimal
  5. Curved Space
  6. The Fourth Dimension
  7. The Ideal
  8. Periodic Space
  9. The Infinite

The follow-up is Roads to Infinity: The Mathematics of Truth and Proof (ISBN 1568814666).  The title makes it sound very profound, and it is.  This book goes deep into various notions of infinity, including a friendly but surprisingly thorough treatment of ordinal and cardinal numbers.  Gödel’s Theorem(s) made accessible without being dumbed down.  Good stuff.

Here’s the takeaway.  These books are beautiful, they make me happy, and you should go buy them and read them.  Someday I want to be able to write like that.

P.S.

If anyone local wants to borrow any of the books mentioned, just drop me a line or stop by my (Ann Arbor) office.  I have a 2nd edition and a 3rd edition of Mathematics and its History, though I plan to award the 2nd ed to a worthy student at semester’s end.


Similarity and the “Right” Proof of the Pythagorean Theorem

24 September 2010

One day last fall, I was making some notes before a lecture in my Geometry for Teachers class. My then-officemate, Todor Milanov, asked me what I would be talking about. When I said “the Pythagorean Theorem”, he asked me if I was going to prove it. When I said I was, he told me that I was going to prove it “the wrong way”.

Well, this was intriguing. I hadn’t told him which of the various proofs I knew was the one I intended to give. How was he so sure I would do it “the wrong way”? And what did that even mean?

I asked him (a bit skeptically at first) to show me “the right proof”, and now I will show you.

First, “the wrong proof”. Traditionally, the Pythagorean Theorem is phrased in terms of squares.

If you draw three squares, one based on each side of a right triangle, then the combined area of the smaller two squares equals the area of the largest square.

Simple and beautiful visual proofs of this fact are easily found.  (Here or here, for example.)

But we can give an even simpler visual proof of the Pythagorean Theorem, if we step back and understand better what it really says.



Link: Math Buskers Juggle Numbers On English Streets

15 August 2010

Well this is interesting.

Math Buskers Juggle Numbers On English Streets : NPR.


Link (with remarks): 20 Moves Or Less Will Solve Rubik’s Cube

14 August 2010

20 Moves Or Less Will Solve Rubik’s Cube : NPR.

This story showed up on Morning Edition yesterday, and I thought it would be worth explaining a little more carefully what it means to “solve” a Rubik’s Cube, and why professional mathematicians would be interested.

This is not a story about puzzles or cubes; it’s a story about groups.  Groups are what mathematicians use to talk about symmetry.  A group is a set of objects (which we can think of as transformations), together with a composition operation which satisfies certain properties, designed to formalize our natural notion of symmetry.  Sometimes a group comes naturally with a set of generators, special elements of the group with the property that every element of the group can be built out of those generators and the composition.  The group of symmetries of the Rubik’s cube is generated by the operations of turning the faces of the cube.  (Each of the six faces can be turned widdershins or antiwiddershins, so we’ll call that 12 generators.)

If you pick up a Rubik’s cube which has been jumbled up, what you really have is the element of the symmetry group of the Rubik’s cube which represents the transformation from an unmixed cube to what you now see before you.  You know it’s in the symmetry group because your adversary mixed the cube up by turning faces (probably many, many times).  But you weren’t watching her turn the faces, so you don’t know what she did (if you did know, you could just reverse her steps).  All you know is what the cube looks like now.  Your job is to think of some sequence of turns that might be what she did, that would have produced the jumble you now hold in your hands; then you do that, backwards.

(Here I’ve been assuming that the cube was mixed by some other person turning the faces in some complicated and/or haphazard way. If you jumble a Rubik’s cube by switching stickers, then you’re evil!  There’s no guarantee that the pattern you made could have been generated by turning faces, so there’s no guarantee that it can be un-jumbled by turning faces.  That is, there’s no guarantee that your transformation is in the Rubik’s cube group.)

This is what a mathematician might see when solving a Rubik’s cube.  Un-jumbling a cube is (in some sense) the same as identifying the transformation used to jumble it and expressing that transformation (really its inverse) in terms of the basic ones (face turns).  Problems like this come up often in mathematics.  Given a group G, a set of generators g_1, g_2, g_3,\ldots, and a group element x, how can we actually write down x as a composition of the g_i?  In general there will be a lot of such representations, but can we find a simple one?  Is there a simplest one?  Can we find it?  These are not (at least, not always) frivolous or recreational questions.
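
To make this concrete, here is a minimal sketch (in Python, with names of my own choosing) of the generator-word problem on a toy group: the eight symmetries of a square rather than the roughly 4.3 \times 10^{19} symmetries of the cube.  It expresses a given symmetry as a shortest word in the generators by breadth-first search; the real Rubik’s cube result needed far cleverer ideas, since a brute-force search like this would be hopeless at that scale.

```python
from collections import deque

def compose(p, q):
    """Compose permutations given as tuples of images: (p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def shortest_word(generators, target, identity):
    """Breadth-first search for a shortest expression of `target`
    as a composition of the given generators."""
    frontier = deque([(identity, [])])
    seen = {identity}
    while frontier:
        perm, word = frontier.popleft()
        if perm == target:
            return word
        for name, g in generators.items():
            nxt = compose(g, perm)          # apply g after everything already in `word`
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, word + [name]))
    return None  # target is not generated by these generators

# Toy example: symmetries of a square acting on its corners 0, 1, 2, 3.
identity = (0, 1, 2, 3)
gens = {
    "rotate": (1, 2, 3, 0),   # quarter turn
    "flip":   (1, 0, 3, 2),   # a reflection
}
jumbled = (3, 2, 1, 0)        # the symmetry your adversary handed you
print(shortest_word(gens, jumbled, identity))   # a shortest word (here it has length 3)
```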

But hadn’t people already solved the Rubik’s cube problem?  So how is this announced result different than what we’ve known about Rubik’s cubes for years?  It’s much stronger.

  • What we usually mean by “being able to solve a Rubik’s cube” is, given a jumbled cube, figuring out a sequence of moves that will unjumble it.  Not necessarily the shortest sequence of moves.  A mathematician might say they have an efficient process for expressing any Rubik’s cube symmetry in terms of face turns.
  • The new result says that, given a jumbled Rubik’s cube, we (more precisely, their computer) can unjumble it in the minimum possible number of moves.  That is, we can express every symmetry as a product of the minimum number necessary.

P.S.

If you’ve never heard of Looney Labs, the company of the person interviewed here, check out their website.  Several of their games are on my favorites list.

P.P.S.

Sometimes you will see a novelty Rubik’s cube in which the faces of an unjumbled cube are 6 different pictures (of cartoon characters, say) instead of 6 solid colors.  This is actually not the same puzzle as a classic Rubik’s cube; it’s slightly more complex.  Why is that?


Really Big Numbers

10 July 2010

Much is made in popular mathematics writing of the human impulse to contemplate infinity, and even more is made of how counterintuitive the infinite can be.  Hilbert’s Hotel, the fact that there are as many counting numbers as there are fractions, and so forth.  But you don’t have to go all the way to infinity to get confused; math is confusing enough “near” infinity, i.e. at really big numbers.  Consider this quotation from distinguished mathematician Ronald Graham.

The trouble with integers is that we have examined only the very small ones.  Maybe all the exciting stuff happens at really big numbers, ones we can’t even begin to think about in any very definite way.  Our brains have evolved to get us out of the rain, find where the berries are, and keep us from getting killed.  Our brains did not evolve to help us grasp really large numbers or to look at things in a hundred thousand dimensions.

I have heard it said (though I don’t remember right now who said it) that humans intuitively perceive numbers much as a person standing in a large meadow perceives distance markers placed, say, at 1-foot intervals.  We see 2 as significantly more than 1, and 10 is a lot more than that.  But it’s hard to compare a million and a billion; they’re both essentially on the horizon.  Indeed, in some ways 3 and 10 can feel further apart than, say, a billion and a trillion.  It’s something like asking a small child whether two stars in the sky are closer together or further apart than, say, her house and her school.  The intuition that comes standard on people is a local thing.  And almost all numbers, like almost all places, are really far away.

Here’s an interesting construction that is almost impossible to believe at first, because all the interesting stuff is happening way far out down the number line.



Getting a Complete Collection

10 June 2010

This summer, I’m teaching 400-level intro probability from Ross, 8th edition.  Recently we covered a problem from that book which has a special place in my heart.  This is a problem which I made up for myself, and then actually solved, when I was in 7th grade, long before I’d ever seen a book on probability.

The Problem.

Suppose that you wish to collect a complete set of a certain kind of collectible cards. (Insert your favorite kind of trading cards, real or imagined.)  There are a total of n different types of card, and whenever you buy a card it is equally likely to be any of the n types.  If you want to buy cards, one at a time, until you have a complete set, how many cards do you expect to buy?  (This is a problem about expected value in the probability theory sense; follow the link if that’s not in your working vocabulary.)

A direct assault on this problem gets very hard in a hurry.  For example, if you wanted to compute the probability that you will complete a collection of 500 things on your 731st try, you’d find that that probability is excruciatingly difficult to work out.  The beauty is that we don’t need to do anything nearly that hard!  All we need are two ideas from probability (both of which are important in their own right) and an elegant way of applying them.

Idea 1: Expectation is Additive.

That is, if X and Y are any random quantities whatsoever, then the expected value of X+Y is the sum of the expected values.  When you put it this way, it seems obvious, and in a sense it is ― the proof is very easy.  But it’s also powerful, because it applies to any quantities whatsoever.  You don’t have to understand how X and Y are correlated, you only need their individual expectations.
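
For instance (a minimal sketch, with variable names and trial count of my own choosing): even if Y is completely determined by X, the averages still add.

```python
import random

# X is a die roll and Y = 7 - X, so Y is completely dependent on X,
# yet the expectation of the sum is still the sum of the expectations.
N = 100_000
xs = [random.randint(1, 6) for _ in range(N)]
ys = [7 - x for x in xs]

print(sum(xs) / N)                              # ~3.5  (E[X])
print(sum(ys) / N)                              # ~3.5  (E[Y])
print(sum(x + y for x, y in zip(xs, ys)) / N)   # exactly 7.0, which is E[X] + E[Y]
```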

Idea 2: How Many Times Do I Have to Roll a Die?

Let’s say you are performing a series of independent trials, and each one “succeeds” with probability p.  Then the expected number of attempts required is 1/p.

This should square pretty well with intuition, but it doesn’t go without checking.  In this case the verification takes the form of an infinite series.  Since the probability of needing exactly k attempts is (1-p)^{k-1}p, we have to sum the series \sum_{k=1}^\infty k(1-p)^{k-1}p, which has value 1/p.  The sum has an elegant solution, but let’s not worry about how to compute it now.  (Exercise for the interested; the rest of you, take my word for it.)

Human intuition about probabilities and expectation is notoriously faulty.  For example, imagine that I am trying to fill up a gallon jug of water.  I do this by pouring in the water from a bunch of other jugs.  Each of these other jugs contains a random amount of water, uniformly chosen from 0 to 1 gallons.  Then each jug has an expected contribution of 1/2 gallon.  So how many jugs do I expect to need to fill my jug?  It’s tempting to say 2, but the true answer is rather higher, closer to 3.  (Actually it’s e.)
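
If you don’t trust your intuition here (and you shouldn’t), both claims are easy to check empirically.  Here is a rough Monte Carlo sketch; the success probability p, the trial count, and the function names are my own choices.

```python
import random

N = 200_000

# Idea 2: with success probability p on each independent attempt,
# the expected number of attempts until the first success is 1/p.
p = 1 / 6   # say, rolling a particular face of a die
def attempts_until_success():
    count = 1
    while random.random() >= p:
        count += 1
    return count
print(sum(attempts_until_success() for _ in range(N)) / N)   # ~6.0 = 1/p

# The jug example: how many Uniform(0, 1) amounts are needed for the total to reach 1 gallon?
def jugs_needed():
    total, count = 0.0, 0
    while total < 1.0:
        total += random.random()
        count += 1
    return count
print(sum(jugs_needed() for _ in range(N)) / N)               # ~2.718, i.e. e
```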

Solution.

Here’s the key idea that makes this problem solvable.  Break up the process of collecting into a bunch of separate steps.  Imagine that you throw a little party each time your collection gets bigger (that is, each time you get a card you didn’t already have).  Instead of keeping track of how long it takes to get each particular type of card, keep track of how long it takes to throw each new party.  Let X_i be the number of cards you have to buy to increase your collection from i-1 to i different cards.  That is, if you just got your tenth new card, it will take X_{11} more purchases to get your next nonrepeated card.

Then the total number of cards that you buy is \sum_{k=1}^n X_k, so the expected total number of cards is \sum_{k=1}^n E[X_k].

But each of the X_k is precisely the situation described in Idea 2!  If you already have k-1 different cards, then each additional card will be one you don’t already have with probability \frac{n-k+1}{n}.  So the expectation of X_k is just \frac{n}{n-k+1}, and we’re essentially done.

Answer.

Thus the expected number of cards required to get a complete set is \frac{n}{n}+\frac{n}{n-1}+\frac{n}{n-2}+\cdots+\frac{n}{2}+\frac{n}{1} = n\sum_{k=1}^n\frac{1}{k}.  It’s a basic fact of analysis that this summation, the sum of the reciprocals of the first n positive integers (the nth harmonic number), is well approximated by \log n.  So a good estimate for the total expectation is n\log n.
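
If you’d like to see the numbers, here’s a small sketch (function names and the trial count are mine) that computes the exact expectation n\sum_{k=1}^n\frac{1}{k}, estimates it by simulation, and compares it with the rough n\log n estimate.

```python
import math
import random

def expected_cards_exact(n):
    """Exact expected number of purchases: n * (1 + 1/2 + ... + 1/n)."""
    return n * sum(1 / k for k in range(1, n + 1))

def expected_cards_simulated(n, trials=2000):
    """Monte Carlo estimate: keep buying uniformly random cards until all n types appear."""
    total = 0
    for _ in range(trials):
        seen, bought = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            bought += 1
        total += bought
    return total / trials

n = 100
print(expected_cards_exact(n))       # ~518.7
print(expected_cards_simulated(n))   # typically within a few cards of the exact value
print(n * math.log(n))               # ~460.5; the harmonic sum is log n + 0.577... + o(1),
                                     # so n log n gives the right order of growth
```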

Remark.

This explains why it seems to be harder and harder and harder to make progress in a collection, the further you get.

As an example, suppose you are trying to collect 100 things, and you have 50 different ones right now.  Is it really reasonable to say that you are about halfway there?  At the beginning, you’d expect to need 100\sum_{k=1}^{100}\frac{1}{k} cards, about 519.  Now that you have 50, you only need 100\sum_{k=1}^{50}\frac{1}{k} more, about 450.  So in a meaningful sense, you have 450/519 of your task still ahead of you, about 86.7% (!!!)

So when are you halfway there?  At what point do you expect to need to buy about 259 more cards?  Believe it or not, you aren’t really halfway to collecting all 100 cards until you have 93 of them!
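
Here’s a quick way to verify that last figure; the helper function and its name are hypothetical choices of mine.

```python
def expected_remaining(n, have):
    """Expected number of further purchases when you already own `have` of the n types."""
    return n * sum(1 / k for k in range(1, n - have + 1))

n = 100
halfway = expected_remaining(n, 0) / 2      # about 259.5 expected purchases still ahead
for have in range(n + 1):
    if expected_remaining(n, have) <= halfway:
        print(have)                         # prints 93
        break
```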