Counting on Monsters

28 September 2010

While we’re on the subject of books, here’s a book for the smaller mathematicians in your life.  The ones who can’t necessarily spell “mathematician”.

The book is You Can Count on Monsters by Richard Evan Schwartz (ISBN 1568815786), and it’s a picture book about prime number decompositions.  It’s much more colorful than you’re picturing, I promise.

One thing that makes this book so nice is that it doesn’t beat you over the head with anything.  There are some very basic remarks about primes and multiplication at the beginning  and some slightly deeper remarks at the very end, but the vast bulk of the pages have no text at all.

Each number from 1 to 100 gets a double page.  On the left is the number and a configuration of that many dots (usually clustered into spirals or some such in an interesting way).  On the right is a whimsical (and strangely compelling) drawing of a monster.  For a prime number, the monster is a simple one which somehow embodies the nature of the number (the right number of teeth, or legs, or whatever).  For a composite number, the monster is in some way a conglomeration of the corresponding prime monsters.  (So the 70-monster incorporates the natures of the 2-, 5-, and 7-monsters.)

There’s plenty to stare at, plenty of patterns sitting right near the surface, and plenty more lurking underneath to be discovered over time.  An artistically-inclined child might try to invent prime monsters larger than 100, or to draw some composite monster.  (I’ll confess that when I bought the book, I entertained myself for quite some time trying to draw the 1001 monster.)  It’s just fun to look at, and it provides lots of interesting avenues for mathematical conversation with “grown-up” mathematicians.

Speaking as a number theorist, I love the way this book conveys the essential predictable/chaotic dual nature of prime numbers.  There are always more monsters, but you’re never quite sure when you’ll meet the next one.


“The Impossible” and “Infinity”: Two Outstanding Books on Math

28 September 2010

When I was at MathFest in Pittsburgh this summer, I bought a pile of math books.  It’s taken a lot of bus rides to get through them all, but now that I have, there are several that I want to recommend.

My favorites from the bunch are two books on mathematics for a general audience by John Stillwell.  Stillwell is best known, I believe, for his excellent tome Mathematics and its History, which is an outstanding textbook for a course (or two or three) in the history of mathematics, expertly blending mathematical content, biographical information, and insight into the historical progression.  I can’t say enough good things about that text — it’s one of my favorite books — but it’s not something that a nonmathematician is likely to buy (because it’s so big and correspondingly expensive).

These two books, though, are short, affordable, accessible, and beautifully written.  Each tells a story of mathematicians dealing with a certain big theme, with a logical progression of increasingly sophisticated and deep mathematics.

The first is Yearning for the Impossible: The Surprising Truths of Mathematics (ISBN 156881254X).  Here the theme is the ongoing evolution of mathematics in response to questioning the impossibility of certain ideas.  There is no real solution to x^2=-1, so we could just throw up our hands and say “It’s impossible!”, but the results are much more interesting if we ask “Is there some other sense in which it is possible?”  This book, just over 200 pages, is really remarkable for the number of disparate and sophisticated ideas it manages to introduce to a general audience.  Let me illustrate this by simply listing the chapter headings.

  1. The Irrational
  2. The Imaginary
  3. The Horizon
  4. The Infinitesimal
  5. Curved Space
  6. The Fourth Dimension
  7. The Ideal
  8. Periodic Space
  9. The Infinite

The follow-up is Roads to Infinity: The Mathematics of Truth and Proof (ISBN 1568814666).  The title makes it sound very profound, and it is.  This book goes deep into various notions of infinity, including a friendly but surprisingly thorough treatment of ordinal and cardinal numbers.  Gödel’s Theorem(s) made accessible without being dumbed down.  Good stuff.

Here’s the takeaway.  These books are beautiful, they make me happy, and you should go buy them and read them.  Someday I want to be able to write like that.

P.S.

If anyone local wants to borrow any of the books mentioned, just drop me a line or stop by my (Ann Arbor) office.  I have a 2nd edition and a 3rd edition of Mathematics and its History, though I plan to award the 2nd ed to a worthy student at semester’s end.


Similarity and the “Right” Proof of the Pythagorean Theorem

24 September 2010

One day last fall, I was making some notes before a lecture in my Geometry for Teachers class. My then-officemate, Todor Milanov, asked me what I would be talking about. When I said “the Pythagorean Theorem”, he asked me if I was going to prove it. When I said I was, he told me that I was going to prove it “the wrong way”.

Well, this was intriguing. I hadn’t told him which of the various proofs I knew was the one I intended to give. How was he so sure I would do it “the wrong way”? And what did that even mean?

I asked him (a bit skeptically at first) to show me “the right proof”, and now I will show you.

First, “the wrong proof”. Traditionally, the Pythagorean Theorem is phrased in terms of squares.

If you draw three squares, one based on each side of a right triangle, then the combined area of the smaller two squares equals the area of the largest square.
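
For the famous 3-4-5 right triangle, for instance, the two smaller squares have areas 3^2 = 9 and 4^2 = 16, and sure enough 9 + 16 = 25 = 5^2, the area of the largest square.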

Simple and beautiful visual proofs of this fact are easily found.  (Here or here, for example.)

But we can give an even simpler visual proof of the Pythagorean Theorem, if we step back and understand better what it really says.

Read the rest of this entry »


Link: Math Buskers Juggle Numbers On English Streets

15 August 2010

Well this is interesting.

Math Buskers Juggle Numbers On English Streets : NPR.


Link (with remarks): 20 Moves Or Less Will Solve Rubik’s Cube

14 August 2010

20 Moves Or Less Will Solve Rubik’s Cube : NPR.

This story showed up on Morning Edition yesterday, and I thought it would be worth explaining a little more carefully what it means to “solve” a Rubik’s Cube, and why professional mathematicians would be interested.

This is not a story about puzzles or cubes; it’s a story about groups.  Groups are what mathematicians use to talk about symmetry.  A group is a set of objects (which we can think of as transformations), together with a composition operation which satisfies certain properties, designed to formalize our natural notion of symmetry.  Sometimes a group comes naturally with a set of generators, special elements of the group with the property that every element of the group can be built out of those generators and the composition.  The group of symmetries of the Rubik’s cube is generated by the operations of turning the faces of the cube.  (Each of the six faces can be turned widdershins or antiwiddershins, so we’ll call that 12 generators.)

If you pick up a Rubik’s cube which has been jumbled up, what you really have is the element of the symmetry group of the Rubik’s cube which represents the transformation from an unmixed cube to what you now see before you.  You know it’s in the symmetry group because your adversary mixed the cube up by turning faces (probably many, many times).  But you weren’t watching her turn the faces, so you don’t know what she did (if you did know, you could just reverse her steps).  All you know is what the cube looks like now.  Your job is to think of some sequence of turns that might be what she did, that would have produced the jumble you now hold in your hands; then you do that, backwards.

(Here I’ve been assuming that the cube was mixed by some other person turning the faces in some complicated and/or haphazard way. If you jumble a Rubik’s cube by switching stickers, then you’re evil!  There’s no guarantee that the pattern you made could have been generated by turning faces, so there’s no guarantee that it can be undone by turning faces.  That is, there’s no guarantee that your transformation is in the Rubik’s cube group.)

This is what a mathematician might see when solving a Rubik’s cube.  Un-jumbling a cube is (in some sense) the same as identifying the transformation used to jumble it and expressing that transformation (really its inverse) in terms of the basic ones (face turns).  Problems like this are often important in mathematics.  Given a group G, a set of generators g_1, g_2, g_3,\ldots, and a group element x, how can we actually write down x as a composition of the g_i?  In general there will be a lot of such representations, but can we find a simple one?  Is there a simplest one?  Can we find it?  These are not (at least, not always) frivolous or recreational questions.
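
To make this concrete, here is a minimal Python sketch of the brute-force approach: a breadth-first search that expresses a target element as a shortest product of generators.  Everything in it (the toy group, the labels, the “jumbled” target) is my own invented illustration, not anything from the actual cube; the Rubik’s cube group has about 4.3\times 10^{19} elements, hopelessly many for a search like this, which is part of why the new result is impressive.

    from collections import deque

    def compose(p, q):
        # Permutations as tuples: apply q first, then p.
        return tuple(p[i] for i in q)

    def shortest_word(generators, target, identity):
        # Breadth-first search: the first time we reach the target,
        # the word of generator names we used is as short as possible.
        frontier = deque([(identity, [])])
        seen = {identity}
        while frontier:
            perm, word = frontier.popleft()
            if perm == target:
                return word
            for name, g in generators.items():
                nxt = compose(g, perm)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, word + [name]))
        return None  # the target is not in the group these generators generate

    # Toy example: symmetries of a square acting on its corners 0..3,
    # generated by a quarter turn "r" and a reflection "f".
    identity = (0, 1, 2, 3)
    generators = {"r": (1, 2, 3, 0), "f": (1, 0, 3, 2)}
    jumble = (3, 2, 1, 0)  # a made-up mixed-up state
    print(shortest_word(generators, jumble, identity))  # ['r', 'r', 'f']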

But hadn’t people already solved the Rubik’s cube problem?  So how is this announced result different from what we’ve known about Rubik’s cubes for years?  It’s much stronger.

  • What we usually mean by “being able to solve a Rubik’s cube” is, given a jumbled cube, figuring out a sequence of moves that will unjumble it.  Not necessarily the shortest sequence of moves.  A mathematician might say they have an efficient process for expressing any Rubik’s cube symmetry in terms of face turns.
  • The new result says that, given a jumbled Rubik’s cube, we (more precisely, their computer) can unjumble it in the minimum possible number of moves.  That is, we can express every symmetry as a product of the minimum number necessary.

P.S.

If you’ve never heard of Looney Labs, the company of the person interviewed here, check out their website.  Several of their games are on my favorites list.

P.P.S.

Sometimes you will see a novelty Rubik’s cube in which the faces of an unjumbled cube are 6 different pictures (of cartoon characters, say) instead of 6 solid colors.  This is actually not the same puzzle as a classic Rubik’s cube; it’s slightly more complex.  Why is that?


Really Big Numbers

10 July 2010

Much is made in popular mathematics writing of the human impulse to contemplate infinity, and even more is made of how counterintuitive the infinite can be.  Hilbert’s Hotel, the fact that there are as many counting numbers as there are fractions, and so forth.  But you don’t have to go all the way to infinity to get confused; math is confusing enough “near” infinity, i.e. at really big numbers.  Consider this quotation from distinguished mathematician Ronald Graham.

The trouble with integers is that we have examined only the very small ones.  Maybe all the exciting stuff happens at really big numbers, ones we can’t even begin to think about in any very definite way.  Our brains have evolved to get us out of the rain, find where the berries are, and keep us from getting killed.  Our brains did not evolve to help us grasp really large numbers or to look at things in a hundred thousand dimensions.

I have heard it said (though I don’t remember right now who said it) that humans intuitively perceive numbers much as a person standing in a large meadow perceives distance markers placed, say, at 1-foot intervals.  We see 2 as significantly more than 1, and 10 is a lot more than that.  But it’s hard to compare a million and a billion; they’re both essentially on the horizon.  Indeed, in some ways 3 and 10 can feel further apart than, say, a billion and a trillion.  It’s something like asking a small child whether two stars in the sky are closer together or further apart than, say, her house and her school.  The intuition that comes standard on people is a local thing.  And almost all numbers, like almost all places, are really far away.

Here’s an interesting construction that is almost impossible to believe at first, because all the interesting stuff is happening way far out down the number line.

Read the rest of this entry »


Getting a Complete Collection

10 June 2010

This summer, I’m teaching 400-level intro probability from Ross, 8th edition.  Recently we covered a problem from that book which has a special place in my heart.  This is a problem which I made up for myself, and then actually solved, when I was in 7th grade, long before I’d ever seen a book on probability.

The Problem.

Suppose that you wish to collect a complete set of a certain kind of collectible cards. (Insert your favorite kind of trading cards, real or imagined.)  There are a total of n different types of card, and whenever you buy a card it is equally likely to be any of the n types.  If you want to buy cards, one at a time, until you have a complete set, how many cards do you expect to buy?  (This is a problem about expected value in the probability theory sense; follow the link if that’s not in your working vocabulary.)

A direct assault on this problem gets very hard in a hurry.  For example, if you wanted to compute the probability that you will complete a collection of 500 things on your 731st try, you’d find that that probability is excruciatingly difficult to work out.  The beauty is that we don’t need to do anything nearly that hard!  All we need are two ideas from probability (both of which are important in their own right) and an elegant way of applying them.

Idea 1: Expectation is Additive.

That is, if X and Y are any random quantities whatsoever, then the expected value of X+Y is the sum of the expected values of X and Y.  When you put it this way, it seems obvious, and in a sense it is ― the proof is very easy.  But it’s also powerful, because it applies to any quantities whatsoever.  You don’t have to understand how X and Y are correlated; you only need their individual expectations.

Idea 2: How Many Times Do I Have to Roll a Die?

Let’s say you are performing a series of independent trials, and each one “succeeds” with probability p.  Then the expected number of attempts required to get a success is 1/p.  (For instance, rolling a fair die until a six comes up takes 6 rolls on average, since p = 1/6.)

This should square pretty well with intuition, but it doesn’t go without checking.  In this case the verification takes the form of an infinite series.  Since the probability of needing exactly k attempts is (1-p)^{k-1}p, we have to sum the series \sum_{k=1}^\infty k(1-p)^{k-1}p, which has value 1/p.  The series has an elegant evaluation, but let’s not worry about how to compute it now.  (Exercise for the interested; the rest of you, take my word for it.)

Human intuition about probabilities and expectation is notoriously faulty.  For example, imagine that I am trying to fill up a gallon jug of water.  I do this by pouring in the water from a bunch of other jugs.  Each of these other jugs contains a random amount of water, uniformly chosen from 0 to 1 gallons.  Then each jug has an expected contribution of 1/2 gallon.  So how many jugs do I expect to need to fill my jug?  It’s tempting to say 2, but the true answer is rather higher, closer to 3.  (Actually it’s e.)
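
If you don’t believe that (and who would, the first time?), it’s easy to check numerically.  Here’s a quick Python sketch of the jug experiment, my own illustration; the simulated average should drift toward e \approx 2.718 as the number of trials grows.

    import random

    def jugs_needed():
        # Pour uniform(0, 1) jugs until the total reaches one gallon.
        total, count = 0.0, 0
        while total < 1.0:
            total += random.random()
            count += 1
        return count

    trials = 200_000
    print(sum(jugs_needed() for _ in range(trials)) / trials)  # about 2.718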

Solution.

Here’s the key idea that makes this problem solvable.  Break up the process of collecting into a bunch of separate steps.  Imagine that you throw a little party each time your collection gets bigger (that is, each time you get a card you didn’t already have).  Instead of keeping track of how long it takes to get each particular type of card, keep track of how long it takes to throw each new party.  Let X_i be the number of cards you have to buy to increase your collection from i-1 to i different cards.  That is, if you just got your tenth new card, it will take X_{11} more purchases to get your next nonrepeated card.

Then the total number of cards that you buy is \sum_{k=1}^n X_k, so the expected total number of cards is \sum_{k=1}^n E[X_k].

But each of the X_k is precisely the situation described in Idea 2!  If you already have k-1 cards, then each additional card will be one you don’t already have with probability \frac{n-k+1}{n}.  So the expectation of X_k is just \frac{n}{n-k+1}, and we’re essentially done.

Answer.

Thus the expected number of cards required to get a complete set is \frac{n}{n}+\frac{n}{n-1}+\frac{n}{n-2}+\cdots+\frac{n}{2}+\frac{n}{1} = n\sum_{k=1}^n\frac{1}{k}.  It’s a basic fact of analysis that the summation \sum_{k=1}^n\frac{1}{k}, the sum of the reciprocals of the first n positive integers, is well approximated by \log n.  So a good estimate for the total expectation is n\log n.
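
Anyone skeptical of the formula can check it by brute force.  The following Python sketch (mine, for illustration) simulates the collecting process and compares the simulated average against the exact value n\sum_{k=1}^n\frac{1}{k} and the estimate n\log n.

    import math
    import random

    def cards_to_complete(n):
        # Buy uniformly random cards until all n types are collected.
        have, bought = set(), 0
        while len(have) < n:
            have.add(random.randrange(n))
            bought += 1
        return bought

    n, trials = 100, 2_000
    average = sum(cards_to_complete(n) for _ in range(trials)) / trials
    exact = n * sum(1 / k for k in range(1, n + 1))
    print(average, exact, n * math.log(n))  # roughly 519, 518.7, 460.5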

Remark.

This explains why it seems to be harder and harder and harder to make progress in a collection, the further you get.

As an example, suppose you are trying to collect 100 things, and you have 50 different ones right now.  Is it really reasonable to say that you are about halfway there?  At the beginning, you’d expect to need 100\sum_{k=1}^{100}\frac{1}{k} cards, about 519.  Now that you have 50, you only need 100\sum_{k=1}^{50}\frac{1}{k} more, about 450.  So in a meaningful sense, you have 450/519 of your task still ahead of you, about 86.7% (!!!).

So when are you halfway there?  At what point do you expect to need to buy about 259 more cards?  Believe it or not, you aren’t really halfway to collecting all 100 cards until you have 93 of them!
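
If you want to verify that 93, or find the analogous point for a collection of a different size, a few lines of Python will do it.  (Here H(m) is just the partial sum \sum_{k=1}^m\frac{1}{k} from the formula above.)

    def H(m):
        # The m-th harmonic number: 1 + 1/2 + ... + 1/m.
        return sum(1.0 / k for k in range(1, m + 1))

    n = 100
    total = n * H(n)  # expected purchases for the whole collection, about 519
    # The smallest collection size m at which the expected number of purchases
    # still to come, n * H(n - m), has dropped to half the original total.
    halfway = next(m for m in range(n + 1) if n * H(n - m) <= total / 2)
    print(halfway)  # 93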


Cap takes on the Monty Hall Problem

17 May 2010

I’m teaching probability this semester, and tomorrow I’ll be talking about that most headache-inducing of problems in probability, the Monty Hall problem.  I hadn’t been planning to go there (because the spring term classes are accelerated, a whole semester’s material in seven weeks, so I keep digressions to a minimum), but now I’ve been asked about it by several of my students. I’m writing this today to get my own thoughts in better order, and hopefully to illuminate the issue for you, dear reader.

If you’ve never heard of the Monty Hall problem (also called the Monty Hall paradox), you’re in for a doozy today. It’s infamous. You can read about it on wikipedia, or here, or here, or here, or in video form here (I could go on almost forever), or even on the website of Let’s Make a Deal (the game show which inspired the present form of the puzzle).  You can even play the game yourself here.

I suppose you could just follow a few of those links and be done with it; and if you don’t like my style then you probably should do just that (but then why are you here?). For my explanation, read on.

The right answer is actually not that complicated, but due to a known bug in human intuition, most people get the wrong answer. Moreover those who get the wrong answer have a tendency to be extremely vociferous in defending their answer. I have actually seen fistfights break out over the answer to this problem. So, discuss this problem with your friends and family only if you’re brave.

(I blog not to unite, but to divide?)

The Monty Hall “paradox” is the story of the following game.  All of my games today involve two characters: Alice (the contestant) and Hatter (the host).

Game 1 Script (Original)

There are three doors.  One leads to a car, and the other two lead to goats.  (The locations of the car and the goats are assigned at random in advance, and Hatter is aware of what is where.)

Alice chooses one door (which she hopes leads to the car) but does not open it.

Hatter, knowing where the car is, opens one of the doors that Alice did not choose, showing her a goat.  (That is, if she picked a goat, he’ll show her the other goat; if she picked the car, he’ll arbitrarily choose a goat to show her.)

Now there are two doors, and Alice knows for sure that one leads to a car and one leads to a goat.  Hatter then offers Alice the option to either switch her choice to the other door or stick with the door she had chosen at the start.

Assuming that Alice wants a car and doesn’t want a goat, is it in her interest to take the prize behind her original door, or is it in her interest to take the prize behind the other door, or does it not matter?

The correct answer is that Alice is twice as likely to get a car if she switches as if she stays; most people think that the chances of getting the car are 50-50 either way, so that it shouldn’t matter.  If that was you, you’re in good company.  Read on for an explanation that will, I hope, be both gentle and convincing.
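
(In the meantime, for those who would rather argue with a computer than with me, here is a quick Monte Carlo sketch of Game 1 in Python.  The door numbering and names are mine; the logic follows the script above.)

    import random

    def play(switch):
        doors = [0, 1, 2]
        car = random.choice(doors)   # the prize is placed at random
        pick = random.choice(doors)  # Alice's initial choice
        # Hatter opens a goat door that Alice did not pick.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    print(sum(play(False) for _ in range(trials)) / trials)  # stay: about 1/3
    print(sum(play(True) for _ in range(trials)) / trials)   # switch: about 2/3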

Read the rest of this entry »


Mathematical Fiction: Riot at the Calc Exam and Reality Conditions

23 February 2010

I came home from the Joint Mathematics Meetings in San Francisco with a sack of new math-related books, about a dozen I think. Most were books of actual math, of the sort you probably imagine me reading. Books of problems, small textbooks of one sort or another, expository work, that sort of thing. But two of my new treasures belong to a genre that I didn’t really know existed until I saw them: mathematics-themed fiction.

When I saw Riot at the Calc Exam and Other Mathematically Bent Fiction (written by Colin Adams, published by the American Mathematical Society) and Reality Conditions: Short Mathematical Fiction (written by Alex Kasman, published by the Mathematical Association of America), I greedily snapped them up. I’ve been very busy lately with multiple active research projects, but I’ve been stealing a few minutes here and there, reading a story on the bus once in a while, and now I’ve finally guilty-pleasured my way through both.

Riot at the Calc Exam is a funny book. The author plays everything for laughs, and there’s a lot to laugh at. “The S.S. Riemann”, a mathematical Titanic story. Self-help for overcoming math anxiety. Personal ads for mathematicians. “The Integral”, a bizarre riff on the Saw genre of movies. And, of course, the title track, the tale of a calculus exam gone very very wrong. It happens that I already knew the author, Colin Adams, as a mathematician (from time to time I do some work in knot theory, his area of expertise), and I consider him one of the funniest mathematicians I have ever met. This book bears that out.

(My favorite line from a Colin Adams lecture I once heard: “…Thistlethwaite, one of the biggest names in knot theory, weighing in at 14 letters…”)

The stories in Kasman’s Reality Conditions are of a different sort. More literary, whatever that means. There’s a story of a mathematician who goes to a desert island in search of the reason why mathematical ideas show up so often in real-life questions and in the inner workings of the universe. A comic-type origin story of Topology Man and Homotopy Girl. A satellite that arrives from a distant alien race and broadcasts cutting-edge math questions. There are murder mysteries and even a horror story (“the object”) of which I dare not say more. There’s humor, sure, but not wacky-zany stuff.  Funny like This American Life can be funny, say, funny-poignant. The title story, of a graduate student who comes just short of mathematical greatness, broke my heart a little bit. Lots of sophisticated mathematics gets thrown around, some of it genuine and some of it chalked up to artistic license, but there are very good afternotes to explain which is which.

Riot was written for mathematicians and the people who love them, to have a laugh at all that is wacky in mathematics, while Reality Conditions was written for everyone, but especially I think students of mathematics or interested bystanders, to illuminate a world and a way of thinking which all too often seems very mysterious.

You know how sometimes when you read a book, or watch a play, or hear a song, or see a painting, you can’t help wishing you could do that? I had that feeling with both of these books, for rather different reasons.

Colin Adams’ Riot taps into the intrinsic humor of mathematics education and the nature of the profession. I think there’s a lot more intrinsic humor in mathematics than many people expect. It’s full of a majestic beauty that most people assume they won’t like, full of strange words and strange uses of familiar words that make it seem a foreign language, full of formalism and esoteric conventions; it’s like classical music in that way. Mathematics is still looking for its Victor Borge.

But if I write just one collection of short stories in my life, I would want it to be like Reality Conditions. I want to be doing that, creating art that gives mathematicians some perspective on what it is they’re doing and, more importantly, gives nonmathematicians some insight into what math is, who studies it and why, and what it has to do with “life”.

I don’t yet know how big the world of mathematical fiction is, or how big it’s going to get, but I wish it well. Mathematics and the things traditionally known as the creative arts (poetry, music, fiction, etc.) might seem disjoint, but in fact there is a lot of interesting interplay just waiting to be explored. Art and math have a lot to say about one another.

The fact that these two collections display such totally different takes on “mathematical fiction” says pretty clearly that there’s a lot of room for more.

I firmly believe that mathematics has a beauty which is distinct from its usefulness. So it should be possible for people to get some appreciation of the soul of mathematics in some way other than a decade of experience as a mathematician. Mathematical fiction in general and Reality Conditions in particular might be some progress on those lines.

Ask the Audience:

If you have read any of these stories, what did you think? I am very interested in how mathematicians and nonmathematicians respond to these pieces. Have you read any other noteworthy mathematical fiction or poetry?

P.S.

I can’t mention Colin Adams without mentioning this classic exchange, attributed to Adams and a calculus student, though I concede that it may be apocryphal.

Student: Professor, what is your favorite kind of math?
Professor: Knot theory.
Student: Me neither!


Bhargava’s Factorials

8 February 2010

Wow. Hard to believe we’re already a week into February. I began my new year with a trip to the 2010 Joint Mathematics Meetings, and I came back with a head and several notebooks full of ideas to contemplate and problems to solve. I’ve missed my little blog here while I’ve been lost in the unexplored corners of the mathemativerse. Sad truth that, until and unless I am tenured, I can’t afford to pass up any opportunities for research; I do have a family to think of. I haven’t been around here for a while, and I’m sorry for that. But today’s post should be a doozy.

Our topic for today comes from this year’s JMM, where the inimitable Manjul Bhargava gave an MAA invited address on factorial functions. (Coincidentally, I first learned about the topic of his talk when I read an article of his, reprinted in the charming book Biscuits of Number Theory, which I bought at the 2009 JMM.)

Wait a minute, factorial functions? That’s what gets talked about at the biggest math conference of the year, where people on the cutting edge of research mathematics get together, factorials? The ones you learned about in high school? Can there really be more to learn about n!?

Amazingly enough, yes. In a conference full of zonotopal algebra and cohomology and harmonic analysis, one of the invited addresses was about factorials. And it turns out that there is more to learn; factorials are really just the tip of an enormous iceberg.

Read the rest of this entry »