We'll start with **matrices** (plural of **matrix**). Take a look at the diagram below. A matrix is just a rectangular array of numbers. We refer to each number in a matrix as a **matrix element**.

We locate each matrix element by the **row** and **column** (in that order) in which it resides in the matrix. Rows are written horizontally, and columns vertically. For example, the element in the 4^{th} row and 5^{th} column of a matrix **X** might be written **x _{45}**.

We also describe a matrix by the number of rows and columns (in that order) that it contains. A 7×8 matrix, for example, would contain 7 rows of 8 numbers, or 8 columns of 7 numbers. Here's the anatomy of a 3×3 matrix:
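If you'd like to experiment with these conventions, here's a quick sketch in Python using the NumPy library (this code is an illustration, not part of the original figure):

```python
import numpy as np

# A 3x3 matrix. In math notation, element a_ij lives in row i,
# column j; NumPy counts from 0, so the element in the 2nd row
# and 3rd column is A[1, 2] in code.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(A.shape)   # (3, 3): 3 rows, 3 columns
print(A[1, 2])   # 6, the element in the 2nd row, 3rd column
```

The `shape` attribute reports (rows, columns) in the same row-first order we use in math.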

The matrix on the right uses some shorthand notation (the ". . ." ellipses) that you'll probably see in some proofs, theorems or procedures about matrices. It's worth taking some time to get used to what it means. You don't want to be writing all 100 elements of a 10 × 10 matrix out by hand, for example.

A vector is often written to locate a point in some space. I say "some space," because we're moving toward generalizing from 3-dimensional space to more dimensions. It's very difficult to conceive of 4, 5 or 100 dimensional spaces in our heads, but that doesn't mean they aren't mathematically valid—and very useful in very practical ways.

Any ordered pair (**x**, **y**) of coordinates on the Cartesian plane is actually just a row vector pointing from the origin (0, 0) to the point of interest.

Likewise, the location of a 3-D point can be written as a 3-D vector (x, y, z), and so on. (Sometimes we call 3-D space "xyz-space.") Here is an example of a point plotted on a 3-D coordinate system.

Whether a vector is written as a row or a column is a matter of which one is more convenient, as you will see in what's ahead.

Bear in mind that *dimensions* might take on many forms. For example, we could consider the motion of the three atoms of a water molecule in nine dimensions: three for the translation of the molecule along the x-, y- and z-axes, three for the rotation of the molecule about each one of these three axes (any complicated rotational motion can be expressed as a little bit of rotation about each of the main axes, as long as they pass through the center of mass), and three coordinates to represent the vibration (stretching and bending) of the bonds of the molecule. We'd represent that with a nine-dimensional row or column vector.

An **n**-dimensional **vector** is a (1 × **n**) or (**n** × 1) matrix. The same vector can be written as a column (**n** × 1) or a row (1 × **n**) with the same meaning. A vector is the location, in some **n**-dimensional space, of a point.
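As a concrete sketch (using Python with NumPy, purely for illustration), the same three coordinates can be stored either way:

```python
import numpy as np

# The same 3-dimensional vector as a row (1 x 3) and a column (3 x 1).
row = np.array([[2, -1, 5]])        # shape (1, 3)
col = np.array([[2], [-1], [5]])    # shape (3, 1)

print(row.shape)  # (1, 3)
print(col.shape)  # (3, 1)
```

Both arrays hold the same three coordinates; only the layout differs.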

We can now say that any **m** × **n** matrix is just a list of **n** m-dimensional column vectors, or a stack of **m** n-dimensional row vectors, like this:
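Here's one way to see that in code (a NumPy sketch, not part of the original diagram): slicing a matrix along its columns or its rows recovers exactly those vectors.

```python
import numpy as np

# A 2x3 matrix viewed as 3 column vectors or as 2 row vectors.
M = np.array([[1, 2, 3],
              [4, 5, 6]])

cols = [M[:, j] for j in range(M.shape[1])]   # 3 two-dimensional columns
rows = [M[i, :] for i in range(M.shape[0])]   # 2 three-dimensional rows

print(cols[0])   # [1 4]
print(rows[1])   # [4 5 6]
```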

A diagonal matrix is one that has non-zero elements only on the diagonal that extends from element **a _{11}** to element **a _{nn}**.
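In NumPy (again, just an illustration), `np.diag` builds such a matrix from a list of its diagonal elements:

```python
import numpy as np

# Build a diagonal matrix from its diagonal elements;
# every off-diagonal element is zero.
D = np.diag([3, 1, 4])
print(D)
# [[3 0 0]
#  [0 1 0]
#  [0 0 4]]
```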

An acceptable shorthand when dealing with matrices like this is just to write large zeros to represent all of the off-diagonal zero elements, like this:

The identity matrix is to matrices what the integer 1 is in arithmetic. It consists of a diagonal of all ones, with every other element equal to zero. Multiplication of a matrix or a vector (covered in another section) by the identity matrix produces no change. In other words, if we multiply matrix **A** by matrix **I**, we get matrix **A**: **A** · **I** = **A**. The identity matrix is a very important one — and it's very easy to remember.
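A quick check of that property in NumPy (an illustrative sketch):

```python
import numpy as np

A = np.array([[2, 7],
              [1, 8]])
I = np.eye(2, dtype=int)   # the 2x2 identity matrix

# Multiplying by I changes nothing, on either side.
print((A @ I == A).all())  # True
print((I @ A == A).all())  # True
```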

An upper triangular matrix has all zeros below the diagonal. It may or may not have a few zeros on or above the diagonal. It can also be written using the "big zero" notation above. Manipulating a matrix into upper-triangular or lower-triangular form (below) is often a goal.

A lower-triangular matrix is just the opposite of an upper-triangular matrix.
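NumPy can extract both triangular forms from any matrix (an illustrative sketch rather than part of the original figures):

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)   # [[1 2 3], [4 5 6], [7 8 9]]

U = np.triu(A)   # upper-triangular: zeros below the diagonal
L = np.tril(A)   # lower-triangular: zeros above the diagonal

print(U)
# [[1 2 3]
#  [0 5 6]
#  [0 0 9]]
```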

Block-diagonal matrices are those in which the only non-zero elements lie within square blocks arranged along the main diagonal. In the sample here, **a** could be any number (even zero).
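One way to assemble a block-diagonal matrix in code (a NumPy sketch; SciPy's `scipy.linalg.block_diag` does this in a single call):

```python
import numpy as np

# Assemble a block-diagonal matrix from two square blocks.
B1 = np.array([[1, 2],
               [3, 4]])
B2 = np.array([[5]])

M = np.zeros((3, 3), dtype=int)
M[:2, :2] = B1        # first block in the upper-left
M[2:, 2:] = B2        # second block in the lower-right
print(M)
# [[1 2 0]
#  [3 4 0]
#  [0 0 5]]
```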

A matrix with so many zeros like the one above, regardless of whether it's diagonal, is sometimes referred to as a **sparse matrix**. Sparse matrices are nice to use in calculations that involve very large matrices (dimensions in the thousands or higher) because a computer can skip storing and multiplying the zero elements entirely, saving a great deal of time and memory.
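To see just how sparse a large diagonal matrix is, here's a small NumPy sketch (libraries like SciPy's `scipy.sparse` exploit this by storing only the non-zero entries):

```python
import numpy as np

# A 1000x1000 diagonal matrix holds 1,000,000 entries,
# but at most 1000 of them can be non-zero.
n = 1000
D = np.diag(np.arange(1, n + 1))

density = np.count_nonzero(D) / D.size
print(density)   # 0.001 -- 99.9% of the entries are zero
```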

There are many other kinds of matrices and much more matrix terminology, but much of it wouldn't make any sense here without context, so we'll cover that as we go into matrix algebra in more detail in other sections.

Here is an example (right) of transposing a 3×4 matrix. Notice that even through this pretty drastic rearrangement, the diagonal elements don't change. They are said to be **invariant** under the transpose operation.
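You can confirm that invariance numerically (a NumPy sketch, assuming the same 3×4 shape as in the figure):

```python
import numpy as np

# Transposing swaps rows and columns, but a_11, a_22, a_33 stay put.
A = np.arange(1, 13).reshape(3, 4)   # a 3x4 matrix

print(np.diag(A))     # [ 1  6 11]
print(np.diag(A.T))   # [ 1  6 11] -- unchanged by the transpose
```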

**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.