In linear algebra we very often need to add or subtract matrices and vectors (mostly vectors), and even more often we need to multiply them.

First we'll look at vector addition using 3D vectors:

Here is an example. It couldn't be simpler, and hopefully you can see why adding vectors of different dimensions just won't work.
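The idea above can be sketched with a couple of small NumPy vectors (the values here are just for illustration, not the ones from the original figure):

```python
import numpy as np

# Two 3-D vectors (values chosen for illustration)
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Addition is element by corresponding element
print(a + b)  # [5 7 9]

# Vectors of different dimensions can't be added:
c = np.array([1, 2])
try:
    a + c
except ValueError as err:
    print("mismatched dimensions:", err)
```

Trying to add a 3-D vector to a 2-D vector raises an error, which is NumPy's way of telling you the dimensions don't match.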

Now we can simply extend this idea to addition or subtraction of matrices:

Here's a specific example:

As usual, subtraction is just addition of the negative, so if you know how to add matrices and vectors, you already know how to subtract them.
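Here's a small sketch of matrix addition and subtraction (with made-up matrices), including a check that subtracting really is the same as adding the negative:

```python
import numpy as np

# Two 2 x 2 matrices (example values)
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Addition and subtraction happen element by corresponding element
print(A + B)  # [[ 6  8]
              #  [10 12]]
print(A - B)  # [[-4 -4]
              #  [-4 -4]]

# Subtraction is just addition of the negative:
print(np.array_equal(A - B, A + (-B)))  # True
```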

For matrices or vectors to be added, they must have the same dimensions.

Matrices and vectors are added or subtracted element by corresponding element.

When we multiply two vectors, the result is a **scalar**. A scalar is just a number. You can think of a scalar as a number with no location or direction (like a temperature or a speed), and a vector as a number with a location or direction (like a latitude or a velocity).

When two **n**-dimensional vectors are multiplied, we multiply the first element of the first vector by the first element of the second, add that to the product of the second elements, then the product of the third elements, and so on, up to **n** products. This kind of product is referred to as a **dot product** or a **scalar product** (because the result is a scalar). Here's what it looks like:
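The recipe above, written out in code with two example vectors, looks like this (the loop mirrors the "multiply matching elements, then add" description; `np.dot` does the same thing in one call):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Sum of element-wise products, written out step by step:
dot = sum(a_i * b_i for a_i, b_i in zip(a, b))
print(dot)           # 32, i.e. 1*4 + 2*5 + 3*6

# NumPy's built-in dot product gives the same scalar:
print(np.dot(a, b))  # 32
```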

Here is a look at the dot product between a row vector and a column vector. Notice that in forming the products that form the final sum, we march from left to right in the row vector and from top to bottom in the column vector, matching the next element of each.

Here's an example of the dot product of two 3-D vectors. Notice that it wouldn't make any sense to multiply an **n**-dimensional vector by an **m**-dimensional vector. The number of elements of the vectors must match in order for us to find a dot product.
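To see the dimension-matching requirement in action, here's what happens if we try to take the dot product of a 3-D vector and a 2-D vector (values are illustrative):

```python
import numpy as np

a = np.array([1, 2, 3])  # 3-dimensional
b = np.array([1, 2])     # 2-dimensional

try:
    np.dot(a, b)
except ValueError as err:
    print("no dot product possible:", err)
```

There's no third element of **b** to pair with the third element of **a**, so the product simply can't be formed.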

I'll just give you the bottom line here, and save the explanation for another section, but the dot product of two vectors is related to the angle between those vectors (assuming we translate them to the same origin): **a·b** = **|a||b|** cos(**θ_ab**). Here **|a|** and **|b|** stand for the lengths of the two vectors and **θ_ab** is the angle between them.
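Rearranging that relationship lets us recover the angle from the dot product and the lengths. A quick sketch with two vectors whose angle we know in advance (45°):

```python
import numpy as np

a = np.array([1, 0, 0])
b = np.array([1, 1, 0])

# cos(theta) = (a . b) / (|a| |b|)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(round(theta, 1))  # 45.0
```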

It might help to begin thinking of a matrix in this situation as an "**operator**," something that alters a vector in a certain way, but is unchanged itself. This isn't only a good way to think about matrix-vector multiplication, but a handy way to think of a number of concepts you'll encounter later on in math.

Here's a first example. We'll multiply the 3 × 1 vector **b** by the 3 × 3 matrix **A** to make a new 3 × 1 vector. You'll have to follow all of the subscripts as we multiply matrix **A** by vector **b**:

The step-by-step details of how that multiplication was performed are illustrated below, but notice that it's really like taking the dot product of the **m**^{th} row of the matrix and the vector to get the **m**^{th} element of the new vector.

Each row of the matrix **A** is multiplied, one element at a time, by the corresponding element of the column vector, and the resulting products are added to give the corresponding row element of the new vector. (Notice that the result of multiplying a 3 × 3 matrix with a 3 × 1 vector is a 3 × 1 vector.)

Once the first row of the new vector is determined, we move to the second and third rows of the matrix, in turn, to find the second and third elements of the new vector. Notice that each new element of the new 3 × 1 vector is just the **dot product** of the appropriate row of the matrix with the column vector. See if you can follow it:

We find the product by building the new vector element-by-element (row-by-row). See if you can follow the steps below to get to the resulting vector, (6, 13, 11).
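Since the original figure isn't reproduced here, the matrix and vector below are one possible choice of **A** and **b** that gives the result (6, 13, 11); the row-by-row loop shows each element being built as a dot product:

```python
import numpy as np

# One choice of A and b that produces the result (6, 13, 11)
# from the text (the original figure's values aren't shown here).
A = np.array([[1, 1, 1],
              [2, 1, 3],
              [1, 2, 2]])
b = np.array([1, 2, 3])

# Each element of the new vector is the dot product of
# the corresponding row of A with the column vector b:
result = np.array([np.dot(row, b) for row in A])
print(result)  # [ 6 13 11]

# NumPy's matrix multiplication does this in one step:
print(A @ b)   # [ 6 13 11]
```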

Now if we try to form the first (upper left) element of the product matrix, we find a mismatch in the number of elements:

So the dimensions must match. Here's an easy-to-remember rule:

We can multiply a 4 × 4 matrix by any matrix or vector with 4 rows, or more generally, we can multiply an **m** × **n** matrix by an **n** × **p** matrix, where **p** can be any positive integer. All that matters is that the number of columns on the left matches the number of rows on the right. The way that this rule is written above might help you remember it, but in any case, if you try to do the wrong thing, you'll notice it because at some point you'll run out of numbers.
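That rule can be checked quickly with NumPy's shape machinery (matrices of ones here, since only the dimensions matter):

```python
import numpy as np

# A 4 x 4 matrix times anything with 4 rows works:
M = np.ones((4, 4))
v = np.ones((4, 1))
print((M @ v).shape)   # (4, 1)

# An m x n matrix times an n x p matrix gives an m x p matrix:
A = np.ones((2, 3))    # m x n
B = np.ones((3, 5))    # n x p
print((A @ B).shape)   # (2, 5)

# Reverse the order and the columns on the left (5)
# no longer match the rows on the right (2):
try:
    B @ A
except ValueError as err:
    print("dimension mismatch:", err)
```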

Consider these vectors and matrices:

Find the following sums, differences and products:

**A·B**, **A·c**, **a·b**, **b·a**, **b·d**, **D·C**, **C·a**, **D·a**, **E·a**, **E·C**, **B·c**

**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.