Now that you've learned how to find the derivative of most functions, this section and the next (curve sketching) will show you some key **applications**. The first is making **linear approximations** of nonlinear functions.

Making linear approximations (and later **quadratic approximations**) can be very handy for finding high-quality approximations of difficult or complicated functions.

Often, we can vastly simplify a problem without loss of real accuracy by making a linear approximation using calculus.

Later we'll learn to improve our approximations, in some cases to as much precision as we'd like. Converting a function like **f(x) = sin(x)** to a **polynomial approximation** (yes, you will learn to do that), for example, can be invaluable for more complicated mathematical modeling.

**When viewed at a sufficiently fine scale, any curve resembles a line**. In the graph below, the function **y = L(x)** is not a bad approximation of **y = f(x)** in the "neighborhood" around **x₀**.

If **L(x)** is the tangent line to **f(x)** at **x₀**, then its slope is the derivative **f'(x₀)**. Recalling that the equation of a line can be found using the point-slope formula,

**y − y₁ = m(x − x₁),**

we find

**L(x) = f(x₀) + f'(x₀)(x − x₀)**

If we agree that our function **f(x)** is approximately equal to **L(x)** near **x₀**, then we have:

**f(x) ≈ f(x₀) + f'(x₀)(x − x₀),**

which is the equation of a tangent line passing through the point **(x₀, f(x₀))**. We presume that the approximation gets better the closer **x** is to **x₀**.
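The tangent-line formula is easy to try out numerically. Here's a short Python sketch (the helper name `linear_approx` is my own, not from the text) that builds **L(x)** from a function, its derivative, and a base point:

```python
import math

def linear_approx(f, df, x0):
    # Tangent-line approximation L(x) = f(x0) + f'(x0)*(x - x0).
    # The caller supplies both the function f and its derivative df.
    fx0, dfx0 = f(x0), df(x0)
    return lambda x: fx0 + dfx0 * (x - x0)

# Tangent line to sin(x) at x0 = 0; since cos(0) = 1, this is L(x) = x
L = linear_approx(math.sin, math.cos, 0.0)
print(L(0.1), math.sin(0.1))
```

Near the base point the two printed values agree closely; farther away they drift apart, just as the graphs suggest.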

When viewed at a fine-enough scale, any curve is approximately a line. We can approximate curves using the derivative.

The linear approximation of a function near a point on its graph is simply the equation of the line tangent to the function at that point:

**f(x) ≈ f(x₀) + f'(x₀)(x − x₀)**

**Example:** Let's look at an example in order to better understand what we're getting at. We'll find an approximation of the function **f(x) = ln(x)** around the point **x₀ = 1**.

From the equation in the green box, we see that we need the value of **f(x₀)**,

**f(1) = ln(1) = 0,**

and the value of the derivative at **x₀**,

**f'(x) = 1/x, so f'(1) = 1.**

Putting it all together, we have

**ln(x) ≈ 0 + 1·(x − 1) = x − 1**

So the contention is that in the **neighborhood** of **x₀ = 1**, the function **ln(x)** behaves approximately like the line **y = x − 1**.

Here's a graph that will illustrate the point:

As long as we stay near **x = 1**, this approximation is "pretty good." It gets worse as we move away from 1 in either direction along the x-axis, and we'll define what makes an approximation "good" or "bad" as we go along.
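You can see the approximation degrade for yourself. This quick check (my own sketch, following the example above) compares **ln(x)** with its tangent line **y = x − 1** at a few points:

```python
import math

# Compare ln(x) with its tangent line at x0 = 1, which is y = x - 1
for x in [0.9, 1.0, 1.1, 1.5, 2.0]:
    approx = x - 1
    exact = math.log(x)
    print(f"x = {x}: approx = {approx:+.4f}, ln(x) = {exact:+.4f}")
```

At x = 1.1 the two values agree to about two decimal places; by x = 2 the approximation overshoots badly.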

When making approximations, it's always important to evaluate the quality of an approximation. I could approximate my weight at 1 lb., but it wouldn't be a *good* approximation.

It's handy to make our approximations around **x = 0** instead of **x = 1**. That is, we'll set **x₀ = 0**. We'll see why a little later, but for now, let's see what effect that has. We'll start out with the definition of the linear approximation from above:

**f(x) ≈ f(x₀) + f'(x₀)(x − x₀)**

If we insert **x₀ = 0**, it's considerably simpler:

**f(x) ≈ f(0) + f'(0)·x**

Now let's calculate linear approximations of the functions **f(x) = sin(x)**, **f(x) = cos(x)** and **f(x) = e^x** because we use those a lot.

We'll need values of **f(0)**, **f'(x)** and **f'(0)**, so let's make a handy table and calculate them, then we'll add the terms to get our approximations.

| f(x) | f(0) | f'(x) | f'(0) |
|--------|------|---------|-------|
| sin(x) | 0 | cos(x) | 1 |
| cos(x) | 1 | −sin(x) | 0 |
| e^x | 1 | e^x | 1 |

Our linear approximations are:

**sin(x) ≈ x,  cos(x) ≈ 1,  e^x ≈ 1 + x**

Below are graphs of each of these three example functions (**black**) and their approximations near x = 0 (magenta). Notice that (1) very close to x = 0, these approximations are very good, and (2) some are better than others. In particular, the linear approximation **sin(x) ≈ x** is very good because the sine function stays nearly linear over a wide range around zero.

While the approximation of the cosine function is good right at x = 0, the cosine curve bends away so quickly there that the approximation degrades even quite close to x = 0 ... but that's not very quantitative.
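To make it quantitative, we can print the function values next to their approximations at a point near zero. This is my own quick check using the three approximations derived above:

```python
import math

# Linear approximations about x = 0: sin(x) ~ x, cos(x) ~ 1, e^x ~ 1 + x
x = 0.1
print(f"sin: {math.sin(x):.6f} vs approx {x:.6f}")
print(f"cos: {math.cos(x):.6f} vs approx {1.0:.6f}")
print(f"exp: {math.exp(x):.6f} vs approx {1.0 + x:.6f}")
```

At x = 0.1 the sine approximation is off by less than 0.0002, while the cosine and exponential approximations are each off by about 0.005, consistent with what the graphs show.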

**Example:** Let's find the linear approximation of **f(x) = ln(1 + x)** around **x₀ = 0** in the same manner, then apply it to a problem.

**f(0) = ln(1) = 0**

**f'(x) = 1/(1 + x), and f'(0) = 1**,

which gives us

**ln(1 + x) ≈ x**.

Now let's see why this might be useful. Suppose we want to know the value of ln(1.1).

Now ln(1.1) = ln(1 + 0.1). That means x = 0.1 in our approximation equation, so ln(1.1) ≈ 0.1.

If I punch in ln(1.1) on my calculator, I get ln(1.1) = 0.0953101798 ... not too shabby.

Let's calculate some other logs, with **x** a little closer to and a little farther away from zero, and see how we do:

Clearly we get into trouble when we stray too far from x = 0. A 10% error for ln(1.2) might be too much to tolerate in some applications, and a 44% error in ln(2) certainly would be. On the other hand, we got within about 2.5% of ln(1.05).
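Here's a short Python loop (my own check, mirroring the comparison above) that computes the relative error of **ln(1 + x) ≈ x** at several values of x:

```python
import math

# Relative error of the approximation ln(1 + x) ~ x
for x in [0.05, 0.1, 0.2, 1.0]:
    exact = math.log(1 + x)
    rel_err = abs(x - exact) / exact
    print(f"ln(1 + {x}) ~ {x}: relative error {100 * rel_err:.1f}%")
```

The errors grow quickly with x: a few percent near zero, but over 40% by the time we try to estimate ln(2).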

**Example:** We follow the same steps. First calculate **f(0)** and **f'(0)**:

Plugging these into our expression for the linear approximation, we get

Now let's again calculate some values of **f(x)** both by our approximation and by the calculator, and examine the errors:

We see again that our simple linear approximation of this rather complicated function is pretty good when working with numbers from its domain very close to zero.

A fair question. Why do all this approximating when a calculator or computer will do the work to high precision almost instantly?

One problem with functions like the one in the example above is that they take a relatively long time for a computer to compute. They involve a lot of multiplication (roots and exponentiation are performed with several multiplication steps in a computer), the most time-consuming arithmetic operation. Now if you've got to perform the same computation millions or billions of times, that can really add up. It may come down to this question: Do I do the calculation using the approximation and accept some small error, or do I go for high precision and never get it done?

**Example:** The idea here is that it's difficult (or at least more time-consuming for a computer) to calculate the square of a number like 5.109, but if we approximate around a convenient base number like **5**, we might find an easy formula for squaring numbers like 5.109, numbers near 5.

We begin with the linear approximation form:

and insert the values we're looking for, using **x₀ = 5** and **f(x) = x²**, which gives **f'(x) = 2x** and **f'(5) = 10**.

Now **f(5) = 25**, and we can plug everything in to get:

**f(x) ≈ 25 + 10(x − 5)**

We now have a simple formula for calculating squares of awkward numbers near 5. It's

**x² ≈ 25 + 10(x − 5),**

which is actually pretty easy to do without a calculator.

For **x = 5.109**, our formula gives **25 + 10(0.109) = 26.09**, while the calculated result is **5.109² = 26.101881**. The difference is about 0.012, less than 0.1% of the actual value, which is probably good enough for most applications.
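The squaring shortcut translates directly into code. A minimal sketch (the function name `square_near_5` is my own):

```python
def square_near_5(x):
    # Linear approximation of f(x) = x**2 about x0 = 5:
    # f(x) ~ f(5) + f'(5)*(x - 5) = 25 + 10*(x - 5)
    return 25 + 10 * (x - 5)

x = 5.109
print(square_near_5(x), x * x)
```

The approximation only requires one multiplication by a round number and an addition, which is the whole point of the trick.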

The equation of motion for a pendulum looks like this:

**d²θ/dt² = −(g/L) sin(θ),**

where **g** is the acceleration of gravity (**g** = 9.8 m/s^{2}), **L** is the length of the pendulum string and **θ** is the angle of the pendulum string from the vertical.

This second-order differential equation is difficult to solve because of the sin(θ) on the right. But as we learned in our examples above, for small angles, and to a very good approximation, **sin(θ) ≈ θ**.

Plugging this approximation into our pendulum equation of motion gives

**d²θ/dt² = −(g/L)·θ**

Now this equation is much easier to solve (I'll omit the solution here as it's more advanced and not relevant to this section), and at small angles it's quite accurate.
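How small does the angle have to be? This quick check (my own, not from the text) measures the relative error of **sin(θ) ≈ θ** at a few angles:

```python
import math

# How good is the small-angle approximation sin(theta) ~ theta?
for deg in [1, 5, 10, 20]:
    theta = math.radians(deg)
    rel_err = abs(theta - math.sin(theta)) / math.sin(theta)
    print(f"{deg:>2} degrees: relative error {100 * rel_err:.3f}%")
```

For swings of a few degrees the error is a small fraction of a percent, which is why the small-angle pendulum equation works so well in practice.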

We'll close out this section by introducing, without any sort of derivation, a formula for adding a *quadratic* term to such an approximation. This is an effort to improve on our linear approximations. We'll apply it to three functions, **f(x) = sin(x)**, **f(x) = cos(x)** and **f(x) = e^x**, so that we can compare the results to the linear approximations we derived above.

The second term includes the second derivative, the derivative of the derivative, which we can label **f''(x)**. The formula is:

**f(x) ≈ f(0) + f'(0)·x + f''(0)·x²/2**

I won't bother to elaborate on the origin of this new term; that will come later when we learn about series. For now, let's rebuild our table of values of **f(0)** and **f'(0)**, and add **f''(0)** to get our quadratic approximations.

| f(x) | f(0) | f'(0) | f''(0) | quadratic approximation |
|--------|------|-------|--------|---------------------------|
| sin(x) | 0 | 1 | 0 | sin(x) ≈ x |
| cos(x) | 1 | 0 | −1 | cos(x) ≈ 1 − x²/2 |
| e^x | 1 | 1 | 1 | e^x ≈ 1 + x + x²/2 |

The table shows that the approximation for the sine function doesn't change upon addition of this new term, since the new term's value is zero. That makes some sense because in the derivation of the linear approximation above, the line **y = x** seemed like a pretty darn good approximation for **sin(x)** around zero.

The approximation of **cos(x)** is improved by addition of the term **−x²/2**, a downward-opening parabola. The graph below (left) shows that new approximation superimposed upon the graph of the function.

Finally, the approximation of **e ^{x}** is also improved by addition of an upward-opening parabola. The improvement adds some curvature to our linear approximation that is concave-upward, just like the function.

Here are graphs of the cosine and exponential functions showing how the new quadratic terms in our approximations bend them in the right direction to match the curvature of the function.
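We can confirm numerically that the quadratic term helps. Here's a sketch (the helper name `quadratic_approx` is my own) comparing the linear and quadratic approximations of cosine and the exponential:

```python
import math

def quadratic_approx(f0, df0, d2f0):
    # f(x) ~ f(0) + f'(0)*x + f''(0)*x**2/2
    return lambda x: f0 + df0 * x + d2f0 * x ** 2 / 2

cos_quad = quadratic_approx(1.0, 0.0, -1.0)  # cos(x) ~ 1 - x^2/2
exp_quad = quadratic_approx(1.0, 1.0, 1.0)   # e^x  ~ 1 + x + x^2/2

x = 0.3
print(abs(math.cos(x) - cos_quad(x)), abs(math.cos(x) - 1.0))
print(abs(math.exp(x) - exp_quad(x)), abs(math.exp(x) - (1 + x)))
```

In each printed pair, the quadratic error (first number) is far smaller than the linear error (second number), which is exactly the curvature correction the graphs illustrate.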


**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.