The language of matrix algebra

In this section we'll learn the basic terminology and definitions used in matrix algebra — how to speak "matrix-ese." You'll develop a few other, deeper concepts as you continue to study the subject, and these will have their own terms and definitions, but these are the absolute foundational ideas. Learn them well to lay a good base of knowledge for later. The basics always matter the most!

We'll start with matrices (plural of matrix). Take a look at the diagram below. A matrix is just a rectangular array of numbers. We refer to each number in a matrix as a matrix element.

We locate each matrix element by the row and column (in that order) in which it resides in the matrix. Rows are written horizontally, and columns vertically. For example, the element in the 4th row and 5th column might be written x45 (or a45 or b45 – it's the subscript "45" that matters; the letter is just a placeholder).

We also describe a matrix by the number of rows and columns (in that order) that it contains. A 7×8 matrix, for example, would contain 7 rows of 8 numbers, or 8 columns of 7 numbers. Here's the anatomy of a 3×3 matrix:
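To make the row-and-column bookkeeping concrete, here's a minimal sketch in plain Python (real work usually uses a library like NumPy, but plain lists keep the idea visible). One caution: math subscripts are 1-based, while Python indexing is 0-based.

```python
# A 2 x 3 matrix stored as a list of rows.
A = [[1, 2, 3],
     [4, 5, 6]]

rows = len(A)       # 2 rows
cols = len(A[0])    # 3 columns

# Math subscripts are 1-based (a12 = row 1, column 2), but Python
# indices are 0-based, so a12 lives at A[0][1].
a12 = A[0][1]
```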

Look at some other examples of matrices below. While the most useful matrices are often square (n × n), matrices don't have to be square. Study these matrices to make sure you're pretty familiar with how to label matrix elements with their row & column subscripts.

The matrix on the right uses some shorthand notation (the . . . ) that you'll probably see in some proofs, theorems or procedures about matrices. It's worth taking some time to get used to what it means. You don't want to be writing all 100 elements of a 10 × 10 matrix out by hand, for example.
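One way to appreciate that shorthand is to describe a big matrix by a rule for its elements instead of writing them all out. Here's a sketch in Python using the illustrative rule a_ij = i + j (my own example, not one from the text), which generates all 100 elements of a 10 × 10 matrix in one line:

```python
# Build a 10 x 10 matrix from a rule for its elements instead of
# writing out all 100 entries by hand. Here a_ij = i + j, using
# 1-based mathematical indices (an illustrative rule only).
n = 10
A = [[(i + 1) + (j + 1) for j in range(n)] for i in range(n)]
```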

Vectors are 1 × n or n × 1 (one-dimensional) matrices

You will probably do very little in matrix algebra (or physics) without using vectors, so it's essential to understand them. You might also want to read about vectors here. A vector is just a one-dimensional matrix, and it can be written as a single row of numbers or a single column.

A vector is often written to locate a point in some space. I say "some space," because we're moving toward generalizing from 3-dimensional space to more dimensions. It's very difficult to conceive of 4, 5 or 100 dimensional spaces in our heads, but that doesn't mean they aren't mathematically valid—and very useful in very practical ways.

Any ordered pair (x, y) of coordinates on the Cartesian plane is actually just a row vector pointing from the origin (0, 0) to the point of interest.

Likewise, the location of a 3-D point can be written as a 3-D vector (x, y, z), and so on. (Sometimes we call 3D space "xyz-space.") Here is an example of a point plotted on a 3-D coordinate system.

Whether a vector is written as a row or a column is a matter of which one is more convenient, as you will see in what's ahead.
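As a quick sketch of the row-versus-column distinction, here are both layouts of the same 2-D vector in plain Python, stored as lists of lists so the shapes stay explicit:

```python
# The same 2-D vector written as a 1 x 2 row matrix and as a
# 2 x 1 column matrix -- the same numbers, just laid out differently.
row_vec = [[3, 4]]      # one row, two columns

col_vec = [[3],
           [4]]         # two rows, one column
```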

Bear in mind that dimensions might take on many forms. For example, we could consider the motion of the three atoms of a water molecule in nine dimensions: three for the translation of the molecule along the x-, y- and z-axes, three for the rotation of the molecule about each one of these three axes (any complicated rotational motion can be expressed as a little bit of rotation about each of the main axes, as long as they pass through the center of mass), and three coordinates to represent the vibration (stretching and bending) of the bonds of the molecule. We'd represent that with a nine-dimensional row or column vector.

An n-dimensional vector is a (1 × n) or (n × 1) matrix. The same vector can be written as a column (n × 1) or a row (1 × n) with the same meaning. A vector is the location, in some n-dimensional space, of a point.

Matrices are made of row vectors and column vectors

We can now say that any m × n matrix is just a row of n m-dimensional column vectors, or a stack of m n-dimensional row vectors, like this:
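Here's a minimal Python sketch of that decomposition: pulling the row vectors and the column vectors out of a small 2 × 3 matrix.

```python
A = [[1, 2, 3],
     [4, 5, 6]]   # a 2 x 3 matrix (m = 2, n = 3)

# m row vectors, each n-dimensional -- these are just the rows:
row_vectors = [row[:] for row in A]

# n column vectors, each m-dimensional -- the j-th column collects
# element j from every row:
col_vectors = [[row[j] for row in A] for j in range(len(A[0]))]
```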

Special matrices

Here is a list of a few special matrices with which you should be familiar.

Diagonal matrix (square only)

A diagonal matrix is one that has non-zero elements only on the diagonal that extends from element a11 to element ann (upper-left to lower-right), where n is the square dimension of the matrix. We don't define diagonality for non-square matrices. As you will see later, a diagonal matrix represents a solved system of linear equations.
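A small Python sketch of the definition (the helper name here is my own, not standard library code): build a square diagonal matrix from its diagonal entries.

```python
def diagonal_matrix(entries):
    """Square matrix with the given entries on the main diagonal
    (a11 to ann) and zeros everywhere else."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)]
            for i in range(n)]

D = diagonal_matrix([2, 5, 7])
```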

An acceptable shorthand when dealing with matrices like this is just to write large zeros to represent all of the off-diagonal zero elements, like this:

The identity matrix, I

The identity matrix is to matrices what the integer 1 is in arithmetic. It consists of a diagonal of all ones, with every other element equal to zero. Multiplication of a matrix or a vector (covered in another section) by the identity matrix produces no change. In other words, if we multiply matrix A by matrix I, we get matrix A: A·I = A. The identity matrix is a very important one — and it's very easy to remember.

Upper-triangular matrices

An upper triangular matrix has all zeros below the diagonal. It may or may not have a few zeros on or above the diagonal. It can also be written using the "big zero" notation above. Manipulating a matrix into upper-triangular or lower-triangular form (below) is often a goal.
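The definition, "every element below the main diagonal is zero," is easy to state as a check in code; here's a hypothetical Python sketch:

```python
def is_upper_triangular(M):
    """True if every element below the main diagonal is zero.
    'Below the diagonal' means column index j < row index i."""
    return all(M[i][j] == 0
               for i in range(len(M))
               for j in range(i))

U = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
```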

Lower-triangular matrices

A lower-triangular matrix is just the opposite of an upper-triangular matrix.

Block-diagonal matrices

Block-diagonal matrices are those in which the non-zero elements are confined to square blocks arranged along the main diagonal, with zeros everywhere else. In the sample here, a could be any number (even zero).
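Here's a hypothetical Python sketch that assembles a block-diagonal matrix from square blocks (the `block_diag` name mirrors a helper found in libraries like SciPy, but this is a standalone toy version):

```python
def block_diag(*blocks):
    """Place square blocks along the main diagonal, zeros elsewhere."""
    n = sum(len(b) for b in blocks)
    M = [[0] * n for _ in range(n)]
    offset = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, value in enumerate(row):
                M[offset + i][offset + j] = value
        offset += len(b)
    return M

M = block_diag([[1, 2],
                [3, 4]], [[5]])
```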

A matrix with many zeros like the one above, whether or not it's diagonal, is sometimes referred to as a sparse matrix. Sparse matrices are convenient in calculations that involve very large matrices (dimensions in the thousands or higher) because, with the right storage scheme, the zero elements don't need to be stored or multiplied at all, which saves a great deal of memory and computing time.
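One common storage scheme, sketched here in plain Python, is a dictionary keyed by (row, column) that holds only the nonzero elements; every absent key is an implicit zero, so zeros cost nothing to store and can be skipped in computations (libraries such as SciPy provide production-grade versions of this idea):

```python
# Store only the nonzero elements, keyed by (row, column).
# Every absent key is an implicit zero.
sparse = {(0, 0): 2.0, (2, 1): -1.5}

def element(sparse, i, j):
    """Look up element (i, j), defaulting to zero."""
    return sparse.get((i, j), 0.0)
```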

There are many other kinds of matrices and much more matrix terminology, but much of it wouldn't make any sense here without context, so we'll cover that as we go into matrix algebra in more detail in other sections.

The transpose of a matrix

The transpose of a matrix is just a rewriting that converts the first row into the first column of the new matrix, the second row to the second column, and so on.

Here is an example (right) using a 3×4 matrix. Notice that even after this pretty drastic rearrangement, the diagonal elements don't change. They are said to be invariant to the transpose operation.
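The transpose is easy to sketch in Python: element (i, j) of the original becomes element (j, i) of the result, and you can check directly that the diagonal survives.

```python
def transpose(M):
    """Rows become columns: element (i, j) moves to (j, i)."""
    return [[M[i][j] for i in range(len(M))]
            for j in range(len(M[0]))]

A = [[1, 2,  3,  4],
     [5, 6,  7,  8],
     [9, 10, 11, 12]]   # 3 x 4

T = transpose(A)        # 4 x 3
# The diagonal elements 1, 6, 11 are invariant: T[i][i] == A[i][i].
```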

This work by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers.