Now we'll see that matrices have properties that take us beyond just solving linear systems. The determinant is an odd concept at first. You'll probably have to come back to it a few times to figure out just what's going on, but its usefulness should be obvious after reading through this section.
The determinant concept will also allow us to develop another method for solving linear systems. It's called Cramer's rule, and we'll go through a few examples in this section.
Consider a general system of two linear equations in two unknowns:

a1x + b1y = c1
a2x + b2y = c2

where a1, a2, b1, and b2 are the numerical coefficients of the variables x and y, and c1 and c2 are constants. Here is that system in matrix form: the coefficient matrix has rows (a1, b1) and (a2, b2), and it multiplies the column vector (x, y) to give the column vector (c1, c2).
Now if we solve each equation for y, we get these two equations, each from one of the original linear equations:
If each right-hand side is equal to y, then each must be equal to the other (transitive property):
Now cross multiply (or multiply both sides by b1b2):
Then group terms containing x on the left side:
Pull the x out as a factor:
And finally, by dividing by (a1b2 - b1a2), we have an expression for x in terms of the coefficients of the two equations (or the components of the coefficient matrix):
We could go through a similar process (and I suggest that you do) to get an expression for y:
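Assuming the original pair of equations is a1x + b1y = c1 and a2x + b2y = c2 (consistent with the coefficient names above), the whole derivation can be compressed into a few lines:

```latex
\begin{aligned}
y &= \frac{c_1 - a_1 x}{b_1} = \frac{c_2 - a_2 x}{b_2} \\[4pt]
b_2 (c_1 - a_1 x) &= b_1 (c_2 - a_2 x) \\[4pt]
(a_1 b_2 - a_2 b_1)\, x &= c_1 b_2 - c_2 b_1 \\[4pt]
x &= \frac{c_1 b_2 - c_2 b_1}{a_1 b_2 - a_2 b_1},
\qquad
y = \frac{a_1 c_2 - a_2 c_1}{a_1 b_2 - a_2 b_1}
\end{aligned}
```

Notice that the same denominator, a1b2 - a2b1, appears in both results.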
Now the thing to notice about these two expressions is that the denominators (a1b2 - a2b1) are the same, and that every term in the denominator comes from the coefficient matrix. Furthermore, if a1b2 - a2b1 is zero, the system can't have a unique solution, because we can't divide by zero.
So it turns out that this number a1b2 - a2b1 is quite important, important enough to get a name: the determinant. And this feature holds for square matrices of any size: if the determinant of a coefficient matrix is zero, the system has no unique solution. Another way of saying that is that the equations are not all linearly independent.
If the determinant of a matrix is zero, then the linear system of equations it represents has no unique solution. In other words, at least one equation in the system is a linear combination of the others, so the equations are not linearly independent.
The determinant of a 2 × 2 matrix is the difference of the products along its two diagonals. It looks like this:
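In code, the 2 × 2 determinant is a one-liner. Here's a minimal Python sketch (the function name `det2` is mine, not a standard one):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: the product
    along the main diagonal minus the product along the other."""
    return a * d - c * b

# Rows that are multiples of each other give a zero determinant:
print(det2(1, 2, -1, -2))   # 1*(-2) - (-1)*2 = 0
print(det2(1, 2, 3, 4))     # 1*4 - 3*2 = -2
```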
In the top example, the rows of the matrix are not linearly independent, because one row can be formed by applying a linear operation to the other: if the top row is multiplied by -1, the result is the bottom row. There is really only one independent piece of information with which to find two variables, so the system can't be solved, and the zero determinant indicates this.
The systems represented by the other two matrices can be solved because their determinants are not zero.
We've already shown that the identical denominators are the determinant of the matrix of coefficients for this system. The numerators can be written as determinants of 2 × 2 matrices, too.
Those matrices consist of the coefficient matrix with the first column substituted with the result vector (c1, c2) for the x-coordinate, and with (c1, c2) substituted for the second column for the y-coordinate. Here's a picture:
You should confirm for yourself that the determinants in the numerators of those expressions reproduce the numerators of our x and y expressions.
For the matrix equation Ax = b, where A is an n × n square matrix of coefficients and x and b are n-dimensional column vectors, the components of x = (x1, x2, x3, ... xn) are given by
where Ai is the matrix A in which column i is substituted with the column vector b.
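Cramer's rule translates directly into code. Here's a sketch using NumPy; `cramer_solve` is my own name for the function, and the zero-determinant check mirrors the caveat above:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("zero determinant: no unique solution")
    x = np.empty(b.size)
    for i in range(b.size):
        Ai = A.copy()
        Ai[:, i] = b          # substitute column i with the result vector
        x[i] = np.linalg.det(Ai) / d
    return x

# Hypothetical system: 2x + y = 3 and x + 3y = 5
print(cramer_solve([[2, 1], [1, 3]], [3, 5]))   # approximately [0.8, 1.4]
```

Note that for large matrices Cramer's rule is far slower than elimination methods; it's most useful for small systems and for reasoning about solutions symbolically.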
To do that, we'll need to figure out how to find determinants of larger matrices, because the multiplication of diagonals trick only works for 2 × 2 matrices.
Here is the formula for the determinant of any 3 × 3 matrix. It looks daunting, but don't worry, there's an easy way to do it that doesn't require memorization of a formula.
The way to remember this method is illustrated below. First, rewrite the first two columns of the matrix, (a, e, i) and (b, f, j), to the right of the matrix. Next, draw in the red diagonals as shown below:
The first three terms of the determinant are the products of the elements along all of these red diagonals: afk + bgi + cej. Then draw in the blue diagonals in the opposite direction (lower left to upper right), as shown. The next three terms are the products along those diagonals subtracted from the first part: -ifc - jga - keb. It's simple.
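The diagonal method (often called the rule of Sarrus) is easy to code, following the same element labels used above. A Python sketch, tried on a hypothetical numeric matrix:

```python
def sarrus_det(m):
    """Determinant of a 3x3 matrix m (a list of rows) by the diagonal
    method: three down-right diagonal products added, three up-right
    diagonal products subtracted."""
    (a, b, c), (e, f, g), (i, j, k) = m
    return (a*f*k + b*g*i + c*e*j) - (i*f*c + j*g*a + k*e*b)

print(sarrus_det([[2, 0, 1],
                  [1, 3, -1],
                  [0, 5, 2]]))   # 27
```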
In this example, the determinant is zero, meaning that the system of 3 equations and 3 unknowns represented by this matrix can't be solved uniquely. Notice that the third row of the matrix is just the sum of the first two; it's a linear combination of them, so the rows are not linearly independent.
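You can check this behavior numerically. Here's a hypothetical matrix (not the one pictured) whose third row is the sum of the first two; its determinant comes out to zero:

```python
import numpy as np

# Third row = first row + second row, so the rows are linearly dependent.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

print(np.linalg.det(M))   # zero, up to floating-point rounding
```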
Here are two more examples, just so you can practice finding 3 × 3 determinants:
These two matrices represent solvable systems of equations because they have non-zero determinants.
Cramer's rule extends nicely to 3 × 3 and larger square matrices. Let's use it to solve a 3 × 3 system:
Now the key to doing these problems by hand is to take your time and stay organized. First, we'll find the determinant of the coefficient matrix, which we'll call A. It will be the denominator in our expressions for x, y and z.
Now we find the determinants of the three matrices formed by substituting the result vector (4, -2, 9) for each column of A in turn:
Now we just do the divisions and arrive at the solution:
(x, y, z) = (1, 2, 3)
1. The determinant of a diagonal matrix is the product of the elements along the diagonal.
This is pretty easy to see using a 3 × 3 or 2 × 2 matrix. For the 3 × 3, every term of the determinant expansion except the first (abc in this case) contains at least one of the zero off-diagonal elements, so all of those terms vanish.
2. The determinant of an upper-triangular or lower-triangular matrix is the product of the elements on the diagonal.
I won't try to prove this for all matrices, but it's easy to see for a 3 × 3 upper-triangular matrix with rows (a, b, c), (0, d, e) and (0, 0, f). The determinant is

adf + be(0) + c(0)(0) - (0)dc - (0)ea - f(0)b = adf,

the product of the elements along the main diagonal. Likewise, the determinant of a lower-triangular matrix is the product of its diagonal elements.
This property means that if we can manipulate a matrix into upper- or lower-triangular form, we can easily find its determinant, even for a large matrix.
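Here's a quick numerical check of this property, using an arbitrary upper-triangular matrix of my own choosing:

```python
import numpy as np

# Upper-triangular: the determinant should be 2 * 3 * 6 = 36.
U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 6.0]])

print(np.linalg.det(U))       # approximately 36.0
print(np.prod(np.diag(U)))    # 36.0, the product of the diagonal
```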
3. The determinant of a block-diagonal matrix is the product of the determinants of the blocks.
Recall that a block-diagonal matrix is one that has nonzero elements only in square blocks along the main diagonal. Here's one:
The determinant of such a matrix is just the product of the determinants of the five blocks. In order from top to bottom they are 3×3, 2×2, 1×1, 1×1 and 3×3 blocks.
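Here's a small numerical check, using a hypothetical block-diagonal matrix assembled from a 2 × 2 block and a 1 × 1 block:

```python
import numpy as np

B1 = np.array([[2.0, 1.0],
               [1.0, 3.0]])   # 2x2 block, det = 2*3 - 1*1 = 5
B2 = np.array([[4.0]])        # 1x1 block, det = 4

# Assemble the 3x3 block-diagonal matrix, zeros everywhere else.
M = np.zeros((3, 3))
M[:2, :2] = B1
M[2:, 2:] = B2

print(np.linalg.det(M))       # approximately 20.0 = 5 * 4
```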
4. The determinant keeps the same absolute value but switches sign when two rows or two columns are swapped.
Here are examples of the effect of row and column swapping on the determinant of a 3 × 3 matrix. First we'll write out a generic (no numbers) determinant:
Now after swapping the top two rows, notice that the determinant has only changed sign:
And likewise for swapping two columns.
Note that if, in the first instance, we had swapped a different pair of rows instead, the sign of the determinant would still change. The same is true for columns. We often want to swap columns or rows to make a matrix triangular or block-diagonal, which makes the determinant easier to calculate.
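A numerical check of the sign change, with an arbitrary 3 × 3 matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, -1.0],
              [0.0, 5.0, 2.0]])

A_swapped = A[[1, 0, 2], :]    # swap the first two rows

print(np.linalg.det(A))          # approximately 27.0
print(np.linalg.det(A_swapped))  # approximately -27.0
```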
xaktly.com by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to email@example.com.