The law

The law of large numbers is a very important piece of probability theory, and it shows up in a great many practical situations. We rely on it all of the time.

Here's an example of how it works: If we calculate the average of all six sides of a die (half a pair of dice), we get (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.

Now of course we won't ever roll a 3.5, so getting that average on a single roll is impossible. It's also unlikely after just a handful of rolls. For example, if we rolled two 6's and a 5, our early average would be (6 + 6 + 5)/3 ≈ 5.67. But over a large number of rolls, as the graph below shows, the average of all rolls will get closer and closer to our "expected average" of 3.5.

The graph shows four simulations (each a different color) of 200 rolls of a single die. While the averages vary wildly at first, they steadily approach the expected average of 3.5 after 100 rolls or so.
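Here's a minimal Python sketch (an illustration, not the code behind the original graph) that reproduces the experiment: four runs of 200 rolls each, tracking the running average.

```python
import random

def running_averages(n_rolls, seed=None):
    """Roll a fair die n_rolls times; return the running average after each roll."""
    rng = random.Random(seed)
    total = 0
    averages = []
    for i in range(1, n_rolls + 1):
        total += rng.randint(1, 6)  # one roll of a fair six-sided die
        averages.append(total / i)
    return averages

# Four independent runs, like the four colored curves in the graph
for run in range(4):
    avgs = running_averages(200, seed=run)
    print(f"run {run}: avg after 10 rolls = {avgs[9]:.2f}, after 200 rolls = {avgs[-1]:.2f}")
```

Each run wanders early on, but by roll 200 every running average has settled close to 3.5.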

The law of large numbers

Given enough trials, the average of the outcomes of a random event will approach its expected value. We can think of it this way: things that are essentially random by nature can become quite predictable if we observe them for long enough.
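In symbols (a standard formulation, not notation from the original): if $X_1, X_2, \ldots$ are independent trials of the same random event, each with expected value $\mu$, then the running average converges to $\mu$:

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \longrightarrow \mu \quad \text{as } n \to \infty.$$

For a fair die, $\mu = 3.5$, which is exactly the settling behavior in the graph above.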

Examples

Here are a few examples of situations in which we rely on the law of large numbers to be true.

1. Diffusion

Imagine putting a drop of ink in a beaker of clear water. You've probably seen this before. The ink slowly spreads out until it uniformly colors the liquid.

The ink spreads because of random collisions between and among ink and water molecules. The cartoon below might help explain the idea.

On the top-left are two boxes separated by a barrier (thick black line). The barrier separates small red atoms (think of those as the ink) from large blue ones.

When the barrier is removed, the random motions of the particles (atoms are always in motion) cause them to mix, eventually reaching the completely mixed state of panel 4. But with such small numbers of particles, fluctuations can produce small imbalances (more red on one side than the other, for example). With numbers this small, panel 4 could conceivably, if only momentarily, turn back into panel 3.

On the other hand, if we have a very large number of atoms (moles, let's say), then we observe a smooth diffusion of each side into the other, and we never see any fluctuations that would "unmix" the atoms; the sample is just too big for that to happen.
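A toy model makes the size effect concrete (this is a caricature, not a real molecular simulation): suppose each of N particles wanders so freely that at any instant it sits on the left side with probability 1/2. The fraction on the left then fluctuates on the order of 1/√N, so small samples "unmix" easily while large ones never visibly do.

```python
import random

def left_fraction(n_particles, rng):
    """Place each particle on the left with probability 1/2; return the left-side fraction."""
    on_left = sum(rng.random() < 0.5 for _ in range(n_particles))
    return on_left / n_particles

rng = random.Random(1)
for n in (4, 100, 1_000_000):
    snapshots = [left_fraction(n, rng) for _ in range(5)]  # five moments in time
    print(f"N = {n:>9,}: left-side fractions = " + ", ".join(f"{f:.3f}" for f in snapshots))
```

With N = 4 you'll routinely see snapshots like 0.000 or 0.750 (the boxes momentarily "unmixing"); with a million particles, and all the more so with a mole's worth, every snapshot reads essentially 0.500.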

2. Insurance

Insurance companies rely on having large numbers of people in their "pools" to even out risk and make their business predictable, something that's good for everybody.

Imagine you have an insurance company. You charge money to provide a benefit to your clients if they are involved in a certain kind of accident. Let's say it's well known that the probability of this kind of accident is one in 10,000 per person per year.

If you have only a few clients, it's entirely conceivable that two might have an accident in the same year, and that would ruin your company. But if you have 10,000 clients, then on average you'll be paying one benefit per year (some years more, some less, of course), which could conceivably be covered by the money you take in.

Large numbers of clients ensure that the risk is as close to the known odds as possible. It's a very important concept to understand for both insurers and people who make public policy about insurance.
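A short simulation (illustrative numbers only, using the 1-in-10,000 accident rate from above) shows how pool size tames the year-to-year scatter in claims:

```python
import random

def yearly_claims(n_clients, p_accident, n_years, rng):
    """Simulate the number of claims in each of n_years for a pool of n_clients."""
    return [sum(rng.random() < p_accident for _ in range(n_clients))
            for _ in range(n_years)]

rng = random.Random(7)
p = 1 / 10_000  # probability of an accident per client per year
for n in (1_000, 10_000, 100_000):
    claims = yearly_claims(n, p, 10, rng)
    print(f"{n:>7,} clients: expect {n * p:g} claims/year, got {claims}")
```

A 1,000-client pool expects only 0.1 claims per year, so a single unlucky year with two claims is a catastrophe relative to the premiums collected; a 100,000-client pool expects 10 per year and sees only modest swings around that number.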

3. Political and other polling

We see political polls and surveys of all kinds all of the time. While most surveys have big reliability issues, polling by random sampling can be pretty reliable because of the law of large numbers.

It turns out that a sample of only 1000 people, as long as it's chosen randomly, is sufficient to predict the outcome of a presidential race to within about ±3%.

And these things have diminishing returns. Doubling the number of samples from 1000 to 2000 doesn't halve our margin of error to ±1.5%. The error shrinks as the square root of the sample size, so doubling the sample only improves it by a factor of about 1.4 (the square root of 2), not 2. And the extra samples cost more to collect, as well.
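These numbers come from the standard margin-of-error formula for a sampled proportion (the usual 95%-confidence version, which I'm assuming here): roughly 1.96·√(p(1−p)/n), which is about 1/√n when p ≈ 0.5. A few lines of Python make the diminishing returns explicit:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n random samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000, 4_000, 10_000):
    print(f"n = {n:>6,}: margin of error = ±{100 * margin_of_error(n):.1f}%")
```

Going from 1,000 to 2,000 samples shrinks the margin from about ±3.1% to ±2.2%; you'd have to quadruple the sample, to 4,000, to cut it in half.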

The law of large numbers says that if we have enough samples, the average we get will converge to the "actual" underlying value.

Of course, we see polls change and give different results all the time, but remember that poll results usually shift as the attitudes of the voters shift, and not all polls ask the same questions.

This graph shows polling data taken between Oct. 2011 and Sept. 2012, just before the presidential election between President Obama and Mitt Romney. Notice that while Obama led most of the way, the difference between the two was mostly within the margin of polling error. The race turned out roughly as the polls predicted it would. Mr. Obama won 51.1% of the popular vote to Mr. Romney's 47.2%. (Source: Real Clear Politics).

