In making any measurement, it's nice to be able to describe the *quality* of that measurement. What we're interested in here is **precision**, the ability of a measurement technique to be reproduced with only small errors.

These graphs represent simulated measurements of a quantity (maybe a length or a mass) with a "true value" of **6**. The green circles are instances of each measurement. In the top graph, four measurements produced a value of 3, three produced a value of 4, and so on. The average of all 25 measurements is 6.12, pretty close to our true value. The number after the ± sign is known as the **standard deviation**, and it represents the precision of those measurements – how *spread out* they are. More on this later, but it will turn out that about 68% of all of the measurements will fall in the range between 6.12 - 2.09 and 6.12 + 2.09.

The 25 measurements in the lower graph are more tightly clustered about the true value. They are more **precise**. This is represented mathematically by the smaller standard deviation, which shows that 68% of all measurements in this set will lie between 6.08 - 0.81 and 6.08 + 0.81, a narrower range than the other set.

On this page we'll learn how to calculate the standard deviation (and related **variance**) of sets of data like these.

The standard deviation is a measure of the width of the underlying distribution of measurements. The narrower the distribution, the more tightly clustered the measurements will be about the mean and the more precise those measurements are.

Roll over or tap the image to compare the widths of a precise and less precise distribution.

Mathematically, the standard deviation, usually signified by the Greek letter sigma ($\sigma$), is the square root of the average squared distance of each measurement from the mean. It looks like this:

$$\sigma = \sqrt{\frac{1}{N} \sum_{n = 1}^{N} (x_n - \bar{x})^2}$$

where:

$\sigma^2$ is the square of the standard deviation, the **variance**.

$N$ total measurements were made.

$x_n$ is the $n^{\text{th}}$ measurement.

$\bar x$ is the average of all $N$ measurements.

You should study that equation a bit. It isn't as intimidating as it looks. If you don't know about summation notation, the Greek capital letter sigma ( $\Sigma$ ) simply means to add up $N$ things starting from the first one (we count them with the variable $n$, starting at $n = 1$) and ending at the $N^{\text{th}}$. There are $N$ total measurements.

What we're summing here is the set of differences or distances between each measurement and the mean value of those measurements. Actually, it's the *square* of those differences, so that they're all positive. The average of those squared differences is called the variance, or $\sigma^2$.

The standard deviation, $\sigma$, is the square root of the variance.
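That definition translates almost line-for-line into code. Here's a minimal Python sketch (the function name and the test data are just for illustration):

```python
import math

def population_std_dev(data):
    """Standard deviation using the 1/N (population) definition."""
    N = len(data)
    mean = sum(data) / N                                # x-bar
    variance = sum((x - mean)**2 for x in data) / N     # sigma squared
    return math.sqrt(variance)                          # sigma

# Example: these eight values have mean 5 and population variance 4
print(population_std_dev([2, 4, 4, 4, 5, 5, 7, 9]))  # → 2.0
```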

In calculating $\sigma$, we're interested in the *distance* of each measurement from the mean value, $\bar x$. Imagine a case where we had three measurements, one at the mean, and one each at $\pm x$ from it, like this:

If we didn't square the differences, one would be $-x$ and the other $+x$. Adding those to zero (the distance of $\bar x$ from itself) would give us $\sigma = 0$, which is not the case. Squaring the differences ensures that they are all positive, so this kind of misleading cancellation can't take place. Taking the square root to get $\sigma$ brings us back to the scale of the measurements in the end.
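You can see this cancellation numerically. With three hypothetical measurements at $\bar x - 2$, $\bar x$, and $\bar x + 2$, the signed differences sum to zero, but the squared differences don't:

```python
data = [4.0, 6.0, 8.0]            # three measurements: mean - 2, mean, mean + 2
mean = sum(data) / len(data)      # 6.0

signed = [x - mean for x in data]            # [-2.0, 0.0, 2.0]
squared = [(x - mean)**2 for x in data]      # [4.0, 0.0, 4.0]

print(sum(signed))   # → 0.0  (misleading: would imply sigma = 0)
print(sum(squared))  # → 8.0  (squaring keeps every term positive)
```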

Let's say we have a set of ten measurements, $x_1$ through $x_{10}$, like this:

In order to calculate the standard deviation (and to have it mean something) we need the mean of those measurements, $\bar x$. That's found by adding them up and dividing by the number of measurements, N = 10. In summation notation that looks like:

$$\bar{x} = \frac{1}{10} \sum_{n = 1}^{10} x_n = 2.88$$

It's a good practice to keep some extra digits while doing the calculation, and pare them back to match the number of significant digits in the data once the calculation is complete. That way we avoid some round-off error. Now recall that the variance (the square of $\sigma$) is

$$\sigma^2 = \frac{1}{N} \sum_{n = 1}^N (x_n - \bar{x})^2$$

To help us do the calculation, it's convenient to make a table:

The sum,

$$\sum_{n = 1}^{10} (x_n - \bar{x})^2 = 7.42$$

is just the sum of the right-most column of the table. Then the variance, $\sigma^2$, is just that sum divided by the number of measurements:

$$\sigma^2 = \frac{1}{10} \sum_{n = 1}^{10} (x_n - \bar{x})^2 = 0.742$$

The standard deviation, $\sigma$, is the square root of the variance:

$$\sigma = 0.86$$

And finally, we can report the average and standard deviation like this, rounding to get back to the same number of digits we had in the data:

$$\bar{x} = 2.9 \pm 0.9$$

Graphically, the data (green circles), the mean, and the standard deviation look like this.

The standard deviation tells us that for the data collected, assuming that if enough data were collected the distribution would be **normal** or **Gaussian** (see Central limit theorem), about 2/3 of the measurements would fall in the range 2.9 - 0.9 to 2.9 + 0.9.
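The whole recipe can be sketched in a few lines of Python. Note that the data below are hypothetical stand-ins, not the ten measurements from the figure:

```python
import math

# Hypothetical measurements (for illustration only)
x = [2.0, 3.5, 2.6, 3.4, 2.9, 2.5, 3.1, 2.8, 3.3, 3.0]
N = len(x)

mean = sum(x) / N                                    # x-bar
variance = sum((xi - mean)**2 for xi in x) / N       # 1/N (population) definition
sigma = math.sqrt(variance)

# Report, rounding back to the precision of the data
print('mean = %.2f, sigma = %.2f' % (mean, sigma))  # → mean = 2.91, sigma = 0.43
```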

The standard deviation is the most-commonly accepted way of describing the precision of measurements.

We must accept that when the number of measurements is small, both an average and its standard deviation have diminished meaning. If, for example, I wanted to measure the average height of American females by measuring the heights of just two women and averaging the results, I hope you wouldn't take that seriously as the average height of ALL women in America. The same is true of a standard deviation calculated from such a small data set.

When the number of measurements is small OR when the sample does not represent an entire population, we customarily divide the sum of squares of $x_n - \bar x$ not by $N$, but by $N-1$.

The so-called **sample variance**, $\sigma^2$, is

$$\sigma^2 = \frac{1}{N - 1} \sum_{n = 1}^N (x_n - \bar{x})^2$$

... and the **sample standard deviation** is the square root of that variance.

The $N-1$ in the denominator increases the variance and standard deviation just a little. It accounts for the loss of a **degree of freedom** in the data. That is, the data was already used once to determine one unknown, the mean. Therefore, we only have $N-1$ new and independent measurements left with which to determine the standard deviation. The sample standard deviation accounts for that.

Now when $N$ is large, $N \approx N - 1$, so the difference between the two definitions is negligible. It's therefore good practice to just always use $N-1$. In our first example above, that would increase $\sigma$ from 0.86 to 0.91, not a significant difference given the precision of our data, but in some situations it can be.

For a sample of a population, or if the number of samples, $N$, is relatively small, the **sample variance** of the mean is

$$\sigma^2 = \frac{1}{N - 1} \, \sum_{n=1}^N (x_n - \bar x)^2$$

When a dataset represents an entire population, or if $N$ is relatively large, the **population variance** is

$$\sigma^2 = \frac{1}{N} \, \sum_{n=1}^N (x_n - \bar x)^2$$

The sample and population **standard deviations** of those are just the square root of the variance. These are referred to as the standard deviation of the mean if we are confident that, given enough measurements, they would be normally distributed.
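The two standard deviations differ only by the factor $\sqrt{N/(N-1)}$, which shrinks toward 1 as $N$ grows. A quick sketch of that correction factor:

```python
import math

# The sample standard deviation equals the population value times sqrt(N/(N-1)).
# Watch the correction factor shrink toward 1 as N grows:
for N in [3, 10, 100, 1000]:
    ratio = math.sqrt(N / (N - 1))
    print('N = %4d: sample sigma = %.4f x population sigma' % (N, ratio))
```

For $N = 10$ the factor is about 1.054, which is exactly why $\sigma$ rose from 0.86 to 0.91 in the first example.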

Find the mean and standard deviation of 20 test scores

Let's say we gave a test and the twenty resulting scores formed the set:

In order to analyze these, it's best to construct a table something like this:

Now the mean of the scores is

$$\bar{x} = 83.85$$

We then sum the squares of the distances of each score from the mean:

$$\sum_{n = 1}^{20} (x_n - \bar{x})^2 = 1,326.65$$

The variance will be this number divided by $N-1$, where $N = 20$; that is, we'll use the sample variance (dividing by $N-1$) for this small sample.

$$\sigma^2 = \frac{1}{\bf 19} \sum_{n = 1}^{20} (x_n - \bar{x})^2 = 69.82$$

The standard deviation is the square root of the variance:

$$\sigma = 8.36$$

And finally we can report our mean with its associated **standard deviation**:

$$\bar{x} = 83.8 \pm 8.4$$

One trick here is knowing just what makes a "small sample." The easy answer is to remember that for a large sample, the difference between dividing by $N$ vs. $N-1$ is small, particularly after we take the square root. The take-home message: just use the $N-1$ definition of the standard deviation.

The meaning of the standard deviation is that, statistically speaking, about 2/3 of all scores fall (or would fall, with enough scores) between $83.8 - 8.4 = 75.4$ and $83.8 + 8.4 = 92.2$. More about that below.

If you haven't had any exposure to calculus, don't worry. You can still understand most of what you need to know about the standard deviation without this little section.

The standard deviation, $\sigma$, is the distance from the mean of our distribution to a set of unique points, inflection points, where the curvature of the graph changes from concave-upward to concave-downward, or the reverse. You may know that inflection points are found by setting the second derivative of a function equal to zero and solving for $x$. Let's do that for the Gaussian distribution, $$F(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{\frac{-x^2}{2\sigma^2}},$$ where we've let $\bar{x} = 0$, which simplifies the function a bit and just centers the distribution at $x = 0$. The first derivative is

$$ \begin{align} F'(x) &= \frac{1}{\sigma \sqrt{2\pi}} e^{\frac{-x^2}{2\sigma^2}} \left( \frac{-2x}{2 \sigma^2} \right) \\[5pt] &= -\frac{1}{\sigma^3 \sqrt{2 \pi}} \, x \, e^{\frac{-x^2}{2\sigma^2}} \end{align}$$

Now the second derivative is

$$ \begin{align} F''(x) &= -\frac{1}{\sigma^3\sqrt{2\pi}} \left( e^{\frac{-x^2}{2\sigma^2}} + x e^{\frac{-x^2}{2\sigma^2}} \left( \frac{-2x}{2\sigma^2} \right) \right) \\[5pt] &= -\frac{1}{\sigma^3\sqrt{2\pi}} \, e^{\frac{-x^2}{2\sigma^2}} \left( 1 - \frac{x^2}{\sigma^2} \right) \end{align}$$

Now set that expression equal to zero to find the inflection points. Notice that the constant factor out front will just divide away into the zero, giving

$$e^{\frac{-x^2}{2\sigma^2}} \left( 1 - \frac{x^2}{ \sigma^2} \right) = 0$$

Now the exponential term is never zero, so the only condition that makes this equation true is if

$$x^2 = \sigma^2,$$

or

$$x = \pm \sigma$$

So $\sigma$ (actually $\bar{x} \pm \sigma$) does indeed give the location of the inflection points.

This unique set of points on our curve ensures that we're always referencing the same width from distribution to distribution.
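You can check this result numerically without any calculus: estimate the second derivative of the Gaussian by finite differences and watch it change sign as $x$ crosses $\sigma$. A small sketch (the function names and the step size $h$ are just illustrative choices):

```python
import math

def gaussian(x, sigma=1.0):
    """Normal distribution centered at 0."""
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def second_derivative(f, x, h=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Concave-down just inside x = sigma, concave-up just outside:
print(second_derivative(gaussian, 0.9))  # negative
print(second_derivative(gaussian, 1.1))  # positive
```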

When the Gaussian or normal probability distribution is divided into standard deviations, we find that the total probability enclosed between $\pm \sigma$ is about 68%, as shown in the graph. That is, we would expect 68% of all measurements approximating the mean to lie between $\pm \sigma$.

Likewise, two standard deviations enclose about 95% of the total probability, and three enclose about 99.7%. In other words, it's very unlikely to find a measurement lying more than three standard deviations above or below the mean.

These results come from calculus: each probability is the area under the Gaussian curve between the given limits, found by integrating the function.
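In Python, that integral is packaged as the error function, `math.erf`: the probability of landing within $k$ standard deviations of the mean is $\operatorname{erf}(k/\sqrt{2})$.

```python
import math

for k in (1, 2, 3):
    prob = math.erf(k / math.sqrt(2))   # P(|x - mean| < k * sigma)
    print('within %d sigma: %.1f%%' % (k, 100 * prob))  # → 68.3%, 95.4%, 99.7%
```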

These numbers are often referred to as the **68–95–99.7 rule**.

For the most part, standard deviations aren't calculated by hand as they've been in these examples. Calculators often have a statistics mode that can be used to calculate means, standard deviations and other properties of data sets.

Likewise, any spreadsheet program should have a suite of built-in functions for calculating statistical properties of all kinds for any data set.

You should become familiar with using both.
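Python ships with such a suite too: the standard library's `statistics` module computes these quantities directly, which is handy for checking hand calculations.

```python
import statistics

data = [2.1, 3.0, 2.6, 3.4, 2.9]   # any small data set

print(statistics.mean(data))    # arithmetic mean
print(statistics.pstdev(data))  # population standard deviation (divide by N)
print(statistics.stdev(data))   # sample standard deviation (divide by N - 1)
```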

Here is some simple Python code that will calculate the standard deviation (both population and sample) of a data set stored in the list variable **x**. This little program is a great one to write and enhance with different data-analysis features along the way. Soon you'll have a comprehensive 1-dimensional data-analysis tool.

```python
import math  # for the square-root function

# Data set: store any 1-D data in the list x
x = [88.1, 92.3, 90.9, 97.2, 68.9, 55.3, 94.2, 82.3, 88.7, 94.0,
     84.1, 85.3, 80.9, 79.2, 78.9, 20.0, 85.3, 70.7, 72.9, 89.6,
     90.2, 93.4, 71.1, 77.8, 65.8, 87.4, 88.3, 92.9, 89.0, 70.6,
     58.2, 95.0, 82.3, 79.4, 90.2, 85.5, 80.3, 81.4, 72.7, 84.0]

nData = len(x)

# Calculate the mean
total = 0.0
for i in range(0, nData):
    total += x[i]
mean = total / nData

# Calculate the sum of squares of differences from the mean
total = 0.0
for i in range(0, nData):
    total += (x[i] - mean)**2

variance_sample = total / (nData - 1)   # divide by N - 1
variance_pop = total / nData            # divide by N
standardDeviation_sample = math.sqrt(variance_sample)
standardDeviation_pop = math.sqrt(variance_pop)

print('mean = %8.2f +/- %8.2f (sample)' % (mean, standardDeviation_sample))
print('mean = %8.2f +/- %8.2f (population)' % (mean, standardDeviation_pop))
print('variance = %8.2f (sample)' % variance_sample)
```

The output of this program is:

```
mean =    80.86 +/-    14.01 (sample)
mean =    80.86 +/-    13.84 (population)
variance =   196.38 (sample)
```

#### The Greek alphabet

| Name | Uppercase | Lowercase |
| --- | --- | --- |
| alpha | Α | α |
| beta | Β | β |
| gamma | Γ | γ |
| delta | Δ | δ |
| epsilon | Ε | ε |
| zeta | Ζ | ζ |
| eta | Η | η |
| theta | Θ | θ |
| iota | Ι | ι |
| kappa | Κ | κ |
| lambda | Λ | λ |
| mu | Μ | μ |
| nu | Ν | ν |
| xi | Ξ | ξ |
| omicron | Ο | ο |
| pi | Π | π |
| rho | Ρ | ρ |
| sigma | Σ | σ |
| tau | Τ | τ |
| upsilon | Υ | υ |
| phi | Φ | φ |
| chi | Χ | χ |
| psi | Ψ | ψ |
| omega | Ω | ω |

**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.