In this section we'll learn how to **add** (and subtract) and **multiply** matrices and vectors. If you aren't sure about terminology, head back to the matrix definitions page to review it.

We very often need to add or subtract matrices and vectors (mostly vectors), and even more often in linear algebra we multiply matrices and vectors.

We can add (or subtract) two matrices or two vectors, as long as the dimensions of both are the same. We can add an **m × n** matrix to another **m × n** matrix, but not to an **m × p** matrix, where **p ≠ n**. Likewise, we can add two 4-D vectors, but not a 3-D and a 4-D vector.

First we'll look at vector addition using 3D vectors:

Here is an example. It couldn't be simpler, and hopefully you can see why adding vectors of different dimensions just won't work.
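As a quick sketch of the idea (using illustrative vectors, not the ones from the figure), component-wise addition of two 3-D vectors looks like this in Python:

```python
# Component-wise addition of two 3-D vectors (illustrative values).
a = [1, 2, 3]
b = [4, 5, 6]

# Each element of the sum is the sum of the corresponding elements.
a_plus_b = [ai + bi for ai, bi in zip(a, b)]
print(a_plus_b)  # [5, 7, 9]
```

Note that `zip` pairs up corresponding elements, which is exactly why the two vectors must have the same dimension.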

Now we can simply extend this idea to addition or subtraction of matrices:

Here's a specific example:

As usual, subtraction is just addition of the negative, so if you know how to add matrices and vectors, you already know how to subtract them.
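A small sketch of both operations (with made-up 2×2 matrices, not the ones from the example above):

```python
# Element-by-element addition and subtraction of two 2×2 matrices
# (illustrative values).
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

# Sum: add corresponding elements.
A_plus_B = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# Difference: subtraction is just addition of the negative.
A_minus_B = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

print(A_plus_B)   # [[6, 8], [10, 12]]
print(A_minus_B)  # [[-4, -4], [-4, -4]]
```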

For matrices or vectors to be added, they must have the same dimensions.

Matrices and vectors are added or subtracted element by corresponding element.

Remember that a vector is the location of a point in some **n**-dimensional space. I'll pause again here just to say that "**n**-dimensional space" can be confusing at first. But in fact, a system can have just about as many dimensions as you can think of. You just might have to tweak what you mean by "dimension."

When we multiply two vectors, the result is a **scalar**. A scalar is just a number. You can think of a scalar as a number with no location or direction (like a temperature or a speed), and a vector as a number with a location or direction (like a position or a velocity).

When two **n**-dimensional vectors are multiplied, we multiply the first element of the first vector by the first element of the second, add that to the product of the second elements, then the product of the third elements, and so on, up to **n** products. This kind of product is referred to as a **dot product** or a **scalar product** (because the result is a scalar). Here's what it looks like:

Here is a look at the dot product between a row vector and a column vector. Notice that in forming the products that form the final sum, we march from left to right in the row vector and from top to bottom in the column vector, matching the next element of each.

Here's an example of the dot product of two 3-D vectors. Notice that it wouldn't make any sense to multiply an **n**-dimensional vector by an **m**-dimensional vector. The number of elements of the vectors must match in order for us to find a dot product.
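The procedure is short enough to sketch directly (illustrative vectors, not the ones from the figure):

```python
# Dot product of two 3-D vectors (illustrative values).
a = [1, 3, -5]
b = [4, -2, -1]

# Multiply corresponding elements, then sum the products.
dot = sum(ai * bi for ai, bi in zip(a, b))
print(dot)  # 1*4 + 3*(-2) + (-5)*(-1) = 3
```

If the vectors had different lengths, `zip` would silently stop at the shorter one, which is the programming analogue of "running out of elements to multiply."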

I'll just give you the bottom line here, and save the explanation for another section, but the dot product of two vectors is related to the angle between those vectors (assuming we translate them to the same origin): **a · b = |a||b| cos(θ_ab)**. Here **|a|** and **|b|** stand for the lengths of the two vectors and **θ_ab** is the angle between them.
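To see the angle relationship in action, here's a sketch that recovers θ from the dot product and the vector lengths (illustrative vectors; the 45° answer is easy to check by eye):

```python
import math

# Recover the angle between two vectors from a·b = |a||b|cos(θ)
# (illustrative vectors in the xy-plane).
a = [1, 0, 0]
b = [1, 1, 0]

dot = sum(ai * bi for ai, bi in zip(a, b))    # a·b = 1
len_a = math.sqrt(sum(ai * ai for ai in a))   # |a| = 1
len_b = math.sqrt(sum(bi * bi for bi in b))   # |b| = sqrt(2)

theta = math.acos(dot / (len_a * len_b))
print(math.degrees(theta))  # 45.0 (up to floating-point rounding)
```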

We frequently need to **multiply** a vector by a matrix. This usually means that the matrix is on the left and the vector (written as a column vector) is on the right.

It might help to begin thinking of a matrix in this situation as an "**operator**," something that alters a vector in a certain way, but is unchanged itself.

This isn't only a good way to think about matrix-vector multiplication, but a handy way to think of a number of concepts you'll encounter later on in math.

Here's a first example. We'll multiply the 3 × 1 vector **b** by the 3 × 3 matrix **A** to make a new 3 × 1 vector. You'll have to follow all of the subscripts as we multiply matrix **A** by vector **b**:

The step-by-step details of how that multiplication was performed are illustrated below, but notice that it's really like taking the dot product of the **m**th row of the matrix and the vector to get the **m**th element of the new vector.

Each row of the matrix **A** is multiplied, one element at a time, by the corresponding element of the column vector, and the resulting products are added to give the corresponding row element of the new vector.

(Notice that the result of multiplying a 3×3 matrix with a 3×1 vector is a 3×1 vector.)

Once the first row of the new vector is determined, we move to the second and third rows of the matrix, in turn, to find the second and third elements of the new vector. Notice that each new element of the new 3 × 1 vector is just the **dot product** of the appropriate row of the matrix with the column vector. See if you can follow it:

Here is a real example of multiplication of a column vector by a square matrix. It helps to know before you begin that the product of a 3×3 matrix and a 3×1 vector will be another 3×1 vector. It can be useful to think of the matrix as something that *operates* on a vector to change it into some other vector.

We find the product by building the new vector element-by-element (row-by-row). See if you can follow the steps below to get to the resulting vector, (6, 13, 11).
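The row-by-row procedure can be sketched like this (with made-up values for **A** and **b**, not the matrix and vector from the worked example):

```python
# Matrix-vector product: each element of the result is the dot product
# of one row of A with the column vector b (illustrative values).
A = [[1, 0, 2],
     [0, 3, 1],
     [2, 1, 0]]
b = [2, 1, 1]

# Row i of A dotted with b gives element i of the new vector.
Ab = [sum(A[i][j] * b[j] for j in range(3)) for i in range(3)]
print(Ab)  # [4, 4, 5]
```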

Multiplying two matrices is just a simple extension of multiplying vectors by matrices, if we think of the second matrix as a row of column vectors. The **mn**th element of the new matrix is the dot product of the **m**th row of the first matrix and the **n**th column of the second, and we simply do this for every combination of rows on the left and columns on the right. Here's what it looks like for a 3×3 matrix multiplied by a 3×2 matrix:
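The same row-times-column recipe, written out for a 3×3 times a 3×2 (illustrative values, not those in the figure):

```python
# Product of a 3×3 matrix and a 3×2 matrix (illustrative values).
# Entry (m, n) of the product is the dot product of row m of A
# with column n of B.
A = [[1, 2, 0],
     [0, 1, 3],
     [2, 0, 1]]
B = [[1, 2],
     [0, 1],
     [3, 0]]

rows, inner, cols = 3, 3, 2
C = [[sum(A[m][k] * B[k][n] for k in range(inner)) for n in range(cols)]
     for m in range(rows)]
print(C)  # [[1, 4], [9, 1], [5, 4]]
```

Notice the result is 3×2: the product inherits its row count from the left matrix and its column count from the right one.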

We can't multiply a 3×3 matrix by a 2×3 matrix because as we take what amounts to the dot product of the first row of the 3×3 and the first column of the 2×3, we run out of elements to multiply in the column vector. Here's what that might look like:

Now if we try to form the first (upper left) element of the product matrix, we find a mismatch in the number of elements:

So the dimensions must match. Here's an easy-to-remember rule:

We can multiply a 4×4 matrix by any matrix or vector with 4 rows, or more generally, we can multiply an **m × n** matrix by an **n × p** matrix, where **p** can be any integer. All that matters is that the number of columns on the left match the number of rows on the right. The way that this rule is written above might help you remember it, but in any case, if you try to do the wrong thing, you'll notice it because at some point you'll run out of numbers.
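That compatibility rule is simple enough to capture in a one-line check (a sketch; `can_multiply` is a hypothetical helper name):

```python
# Dimension check for matrix multiplication: an m×n matrix times an
# n×p matrix works; the inner dimensions must match.
def can_multiply(shape_left, shape_right):
    """Return True if matrices of these (rows, cols) shapes can be multiplied."""
    return shape_left[1] == shape_right[0]

print(can_multiply((3, 3), (3, 2)))  # True:  3×3 times 3×2
print(can_multiply((3, 3), (2, 3)))  # False: 3×3 times 2×3
print(can_multiply((4, 4), (4, 1)))  # True:  4×4 times a 4-D column vector
```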

Consider these vectors and matrices:

Find the following sums, differences and products:


**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2016, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.