In any mathematical system that can be used to represent and solve real problems, it's a great advantage to have a multiplicative inverse. For the set of rational numbers, $\mathbb{Q},$ the multiplicative inverse is simply the reciprocal:
$$x \cdot \frac{1}{x} = \frac{1}{x} \cdot x = 1.$$
Multiplying an element of our set by its inverse yields the identity element, 1. We'd like to have such an inverse for square $(n \times n)$ matrices. For a matrix $A,$ we'll call the inverse $A^{-1}.$ Then we have
$$A \cdot A^{-1} = A^{-1}A = I,$$
where $I$ is the identity matrix, the matrix with 1's along the diagonal, zeros elsewhere. It's also true that $A \cdot I = A$ and $A^{-1} \cdot I = A^{-1}.$
One important case where the inverse matrix can be very helpful is in solving systems of equations. We can represent a system as
$$A \cdot \vec{x} = \vec{s},$$
where $A$ is the matrix of coefficients, $\vec{x} = (x, y, z)$ or $(x_1, x_2, \dots, x_n)$ is the $n$-dimensional vector of variables, and $\vec{s}$ is the vector of solutions, of the same dimension. When we solve such a system, it's $\vec{x}$ that we seek.
Now if we have the inverse of $A,$ then we can also find $\vec{x}$ this way:
$$ \begin{align} A \cdot \vec{x} &= \vec{s} \\[5pt] A^{-1}\cdot A \vec{x} &= A^{-1}\cdot \vec{s} \\[5pt] \vec{x} &= A^{-1}\cdot \vec{s} \end{align}$$
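In code, this last step is a short computation once the inverse is in hand. Here's a minimal sketch in Python with NumPy, using the coefficient matrix from the worked example below; the right-hand side $\vec{s}$ is a made-up vector chosen just for this illustration:

```python
import numpy as np

# Coefficient matrix A from the worked example below; s is a
# hypothetical right-hand side chosen just for this demo.
A = np.array([[1, 1, -1],
              [1, -1, 0],
              [1, 2, 1]], dtype=float)
s = np.array([1, 2, 3], dtype=float)

A_inv = np.linalg.inv(A)  # raises LinAlgError if A is singular
x = A_inv @ s             # x = A^-1 · s
print(x)                  # [2. 0. 1.]
```

In practice, `np.linalg.solve(A, s)` solves $A \cdot \vec{x} = \vec{s}$ directly and is preferred over forming the inverse explicitly, but the version above mirrors the algebra.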
Let's see how to calculate the inverse of a 3 × 3 matrix, and then apply one in this way to a 3-equation, 3-unknown problem.
We'll begin with a simple 3 × 3 example.
$$\text{Let} \; A = \left( \begin{matrix} 1 & 1 & -1 \\ 1 & -1 & 0 \\ 1 & 2 & 1 \end{matrix} \right)$$
To find the inverse of this matrix, we'll first construct an augmented matrix, in which we place our 3 × 3 matrix, $A,$ side-by-side with a 3 × 3 identity matrix, like this:
$$\left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 & 1 & 0\\ 1 & 2 & 1 & 0 & 0 & 1 \end{array} \right)$$
Now the idea is to perform elementary row operations (i.e. Gauss-Jordan elimination) on the left-side 3 × 3 in order to transform it into the identity matrix. The right side will "come along for the ride" through these operations, and will end up being the inverse matrix, $A^{-1},$ in the end. You'll see how it works. Let's begin. First we'll focus on zeroing the lower two 1's of the left-most column by subtracting row 1 from row 2 and from row 3:
$$\left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 1 & -1 & 0 & 0 & 1 & 0\\ 1 & 2 & 1 & 0 & 0 & 1 \end{array} \right) \longrightarrow \left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 0 & -2 & 1 & -1 & 1 & 0\\ 0 & 1 & 2 & -1 & 0 & 1 \end{array} \right) $$
Now we'll keep moving forward. There are a number of choices you could make; I'll take this path: add half of row 2 to row 3, then divide row 2 by $-2$ and row 3 by $\frac{5}{2}$:
$$\left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 0 & -2 & 1 & -1 & 1 & 0\\ 0 & 0 & \frac{5}{2} & -\frac{3}{2} & \frac{1}{2} & 1 \end{array} \right) \longrightarrow \left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 0 & 1 & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & 0\\ 0 & 0 & 1 & -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{array} \right) $$
Now do two more operations, adding half of row 3 to row 2, then adding row 3 to row 1:
$$ \rightarrow \left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 0 & 1 & 0 & \frac{1}{5} & -\frac{2}{5} & \frac{1}{5}\\ 0 & 0 & 1 & -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{array} \right) \longrightarrow \left( \begin{array}{ccc|ccc} 1 & 1 & 0 & \frac{2}{5} & \frac{1}{5} & \frac{2}{5} \\ 0 & 1 & 0 & \frac{1}{5} & -\frac{2}{5} & \frac{1}{5} \\ 0 & 0 & 1 & -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{array} \right)$$
We can finish by simply subtracting row 2 from row 1:
$$ \rightarrow \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{5} & \frac{3}{5} & \frac{1}{5} \\ 0 & 1 & 0 & \frac{1}{5} & -\frac{2}{5} & \frac{1}{5}\\ 0 & 0 & 1 & -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{array} \right)$$
So our inverse matrix, $A^{-1},$ is
$$A^{-1} = \left( \begin{matrix} \frac{1}{5} & \frac{3}{5} & \frac{1}{5} \\ \frac{1}{5} & -\frac{2}{5} & \frac{1}{5} \\ -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{matrix} \right)$$
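Incidentally, the whole procedure we just carried out can be written as a short algorithm. Here's a bare-bones Gauss-Jordan inversion in Python, a sketch written for clarity rather than numerical robustness (production code would use partial pivoting or a library routine):

```python
def invert(A):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I].

    A is a list of row-lists. Returns the inverse as a new list of rows,
    or raises ValueError if A is singular. A teaching sketch: it swaps
    rows to avoid zero pivots but does no pivoting for stability.
    """
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]

    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0.0), None)
        if pivot is None:
            raise ValueError("matrix is singular; it has no inverse")
        M[col], M[pivot] = M[pivot], M[col]

        # Scale the pivot row so the diagonal entry becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]

        # Subtract multiples of the pivot row to zero the rest of the column.
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]

    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in M]


# The matrix from the example: the result is the decimal form of A^-1,
# e.g. first row [0.2, 0.6, 0.2] = (1/5, 3/5, 1/5).
print(invert([[1, 1, -1], [1, -1, 0], [1, 2, 1]]))
```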
Now we can show that $A \cdot A^{-1} = \bf I$ by multiplying these matrices.
$$\left( \begin{array}{ccc} 1 & 1 & -1 \\ 1 & -1 & 0 \\ 1 & 2 & 1 \end{array} \right) \left( \begin{array}{ccc} \frac{1}{5} & \frac{3}{5} & \frac{1}{5} \\ \frac{1}{5} & -\frac{2}{5} & \frac{1}{5} \\ -\frac{3}{5} & \frac{1}{5} & \frac{2}{5} \end{array} \right) = \left( \begin{array}{ccc} \frac{1}{5} + \frac{1}{5} + \frac{3}{5} & \frac{3}{5} - \frac{2}{5} - \frac{1}{5} & \frac{1}{5} + \frac{1}{5} - \frac{2}{5} \\ \frac{1}{5} - \frac{1}{5} + 0 & \frac{3}{5} + \frac{2}{5} + 0 & \frac{1}{5} - \frac{1}{5} + 0 \\ \frac{1}{5} + \frac{2}{5} -\frac{3}{5} & \frac{3}{5} - \frac{4}{5} + \frac{1}{5} & \frac{1}{5} + \frac{2}{5} + \frac{2}{5} \end{array} \right) $$
The matrix on the right is the identity matrix, so we did indeed find the inverse of matrix $A.$ You can check for yourself that this product commutes: $A^{-1} \cdot A = \bf I.$ (But remember that matrix multiplication is not commutative in general.)
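If you'd rather not push all those fifths around by hand, a quick numerical check (a NumPy sketch) confirms both products:

```python
import numpy as np

A = np.array([[1, 1, -1],
              [1, -1, 0],
              [1, 2, 1]], dtype=float)
A_inv = np.array([[1, 3, 1],
                  [1, -2, 1],
                  [-3, 1, 2]], dtype=float) / 5

# Both products should match the 3x3 identity, up to rounding.
print(np.allclose(A @ A_inv, np.eye(3)))  # True
print(np.allclose(A_inv @ A, np.eye(3)))  # True
```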
Can every $n \times n$ matrix be inverted the way we did above? No. Let's look at an example and try to figure out why.
$$\text{Let} \; A = \left( \begin{matrix} 1 & 1 & -1 \\ 1 & -1 & 1 \\ 2 & 2 & -2 \end{matrix} \right)$$
Our augmented matrix looks like this:
$$\left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 1 & -1 & 1 & 0 & 1 & 0\\ 2 & 2 & -2 & 0 & 0 & 1 \end{array} \right)$$
The first manipulations we might make are to subtract row 1 from row 2, and twice row 1 from row 3:
$$\left( \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0\\ 0 & -2 & 2 & -1 & 1 & 0\\ 0 & 0 & 0 & -2 & 0 & 1 \end{array} \right)$$
Notice that our quest for an inverse matrix completely breaks down here: we've zeroed out the last row of the original matrix, and no elementary row operation can turn that row of zeros into a row of the identity matrix. The matrix has no inverse. Why?
The rows of a matrix are said to be linearly independent if no row can be obtained through elementary operations (multiplication by a constant, addition & subtraction) on the remaining row(s). Notice that the third row of our matrix in this example is just twice the first row. The third row is dependent on the first, so this set of equations contains only two independent pieces of information, which is insufficient for determining three unknowns.
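A computer can spot this dependence for us: the rank of a matrix counts its linearly independent rows, and here it is 2 rather than 3. A quick check, sketched with NumPy:

```python
import numpy as np

A = np.array([[1, 1, -1],
              [1, -1, 1],
              [2, 2, -2]], dtype=float)

print(np.array_equal(A[2], 2 * A[0]))  # True: row 3 is twice row 1
print(np.linalg.matrix_rank(A))        # 2, not 3: rows are dependent
```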
One way to tell whether a matrix represents a solvable system and is invertible is to calculate the determinant. A zero determinant means that the rows of the matrix are not linearly independent, and that the system of equations it represents has no unique solution. Here's the determinant of our example matrix:
$$\det \left( \begin{matrix} 1 & 1 & -1 \\ 1 & -1 & 1 \\ 2 & 2 & -2 \end{matrix} \right)$$
$$ \begin{align} &= 2 + 2 - 2 - (2 + 2 - 2) \\ &= 2 - 2 = 0. \end{align}$$
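The same test in code: the determinant comes out zero, and NumPy's inversion routine refuses the matrix. A minimal sketch:

```python
import numpy as np

A = np.array([[1, 1, -1],
              [1, -1, 1],
              [2, 2, -2]], dtype=float)

print(np.linalg.det(A))  # 0.0 (to within floating-point rounding)

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)  # numpy reports a singular matrix
```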
The rows of a matrix of coefficients are linearly independent if no row can be obtained by applying elementary operations to the other row(s) of the matrix.
A matrix that has a zero determinant has no inverse; such a matrix is called singular, and the system of equations it represents has no unique solution.
A. Determine (← see what I did there?) whether the following matrices have an inverse.
1.
$$\left( \begin{matrix} 1 & -1 \\ -1 & 1 \end{matrix} \right)$$
2.
$$\left( \begin{matrix} 1 & 1 \\ -1 & 1 \end{matrix} \right)$$
3.
$$\left( \begin{matrix} 1 & -1 & 1 \\ -1 & 1 & 2 \\ -1 & -1 & 2 \end{matrix} \right)$$
4.
$$\left( \begin{matrix} 1 & -1 & 2 \\ 2 & -2 & 1 \\ -2 & 2 & -4 \end{matrix} \right)$$
B. Calculate the inverses of these matrices, and show that $A \cdot A^{-1} = I.$
5.
$$\left( \begin{matrix} 1 & -1 \\ 1 & 1 \end{matrix} \right)$$
6.
$$\left( \begin{matrix} 1 & 2 \\ 3 & 5 \end{matrix} \right)$$
7.
$$\left( \begin{matrix} -7 & 1 \\ 2 & 7 \end{matrix} \right)$$
8.
$$\left( \begin{matrix} 1 & 1 & 4 \\ 2 & 1 & 1 \\ 1 & 2 & 1 \end{matrix} \right)$$