The functions most people are familiar with map real numbers to real numbers; they are functions of real numbers.

For example, the function \( f(x) = 3x + 1 \) takes a real number as input and outputs a real number. The set from which a function draws its input values is called that function's domain, and the set in which its output values lie is called that function's codomain.

Because the function \( f(x) = 3x + 1 \) maps real numbers to real numbers, it can be described with the notation \( f: \mathbb{R} \to \mathbb{R} \). This indicates that the function's domain and codomain are both the set of real numbers.

For general functions the same notation is used to show the domain and codomain:

\( f: \text{Domain} \to \text{Codomain} \)

In linear algebra we deal with vectors and functions of vectors: functions which map between vector spaces.

Just as functions of real numbers can be categorized as linear, quadratic, and so on, so too can functions of vectors. In linear algebra we usually restrict ourselves to functions of vectors which are linear; such a function is called a linear transformation.

Linear transformations are generally represented with a capital \( T \), and when they map between real vector spaces they can be described as \( T: \mathbb{R}^n \to \mathbb{R}^m \).

If \( T: V \to W \) is a mapping from a vector space \(V \) to a vector space \(W \), then \(T\) is called a **linear transformation** from \(V \) to \(W \) if the following two properties hold for all vectors \(\overrightarrow{u} \) and \(\overrightarrow{v} \) in \(V \) and for all scalars \( k \):

\( T(k\overrightarrow{u}) = kT(\overrightarrow{u}) \)

\( T(\overrightarrow{u}+\overrightarrow{v}) = T(\overrightarrow{u}) + T(\overrightarrow{v}) \)
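As a quick numerical sanity check, the sketch below verifies both properties for the map \( T(\overrightarrow{v}) = A\overrightarrow{v} \), where the \( 2 \times 2 \) matrix \( A \) and the test vectors are arbitrary illustrative choices, not taken from the text.

```python
def matvec(A, v):
    """Apply a 2x2 matrix A (given as a list of rows) to a 2-vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[3, 1], [0, 2]]          # an arbitrary example matrix
u, v, k = [1, 2], [-4, 5], 3  # arbitrary vectors and scalar

# Property 1: T(ku) = kT(u)
assert matvec(A, [k * x for x in u]) == [k * x for x in matvec(A, u)]

# Property 2: T(u + v) = T(u) + T(v)
sum_uv = [a + b for a, b in zip(u, v)]
Tu, Tv = matvec(A, u), matvec(A, v)
assert matvec(A, sum_uv) == [a + b for a, b in zip(Tu, Tv)]
```

Any matrix passes both checks for any choice of vectors and scalar, which is exactly why matrix multiplication is the prototypical linear transformation.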

In the case where \( V = W \), the linear transformation \(T \) is called a linear operator on a vector space \(V \).

When a matrix and a vector are multiplied, the result is a second vector. This is usually written \( T\overrightarrow{v} = \overrightarrow{w} \); notice the “input vector” \( \overrightarrow{v} \) is written to the right of the linear transformation \( T \), and the “output” of the multiplication is a second vector \( \overrightarrow{w} \). When a matrix and vector are multiplied, the vector is written to the right of the matrix.

The operation of matrix-vector multiplication can be viewed as a function which takes as input a vector \( \overrightarrow{v} \) and outputs another vector \( \overrightarrow{w} \): \( T(\overrightarrow{v}) = \overrightarrow{w} \). The reason the term linear transformation is used rather than linear function is that the name is meant to suggest that we view linear transformations as a movement of space.

To visualize the effect that a matrix has on a vector space, imagine every vector \( \overrightarrow{v} \) moving to the vector \( \overrightarrow{w} \) it is mapped to by the linear transformation. If we visualize each vector as a dot at the coordinates where its head would lie, then the effect of a matrix on a vector space can be visualized as the movement of a grid of dots/vectors.

A matrix is written as a grid of scalars, but a useful way to look at a matrix is as an ordered list of column vectors. If you're given a matrix you can determine the transformation it has on space by examining the columns of the matrix.

The operation of matrix-vector multiplication results in a linear combination of the matrix's column vectors.

\( \begin{bmatrix} {\color{vred}T_{11}} & {\color{vblue}T_{12}} \\ {\color{vred}T_{21}} & {\color{vblue}T_{22}} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = v_1{\color{vred}\begin{bmatrix} T_{11} \\ T_{21} \end{bmatrix}} + v_2{\color{vblue}\begin{bmatrix} T_{12} \\ T_{22} \end{bmatrix}} \)
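This identity can be checked in a few lines of Python; the matrix and vector entries below are arbitrary placeholders chosen for illustration:

```python
def matvec(T, v):
    """Standard matrix-vector product of a 2x2 matrix T and a 2-vector v."""
    return [T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1]]

T = [[1, 4],
     [2, 5]]        # columns are [1, 2] and [4, 5]
v = [3, -1]

col1 = [T[0][0], T[1][0]]
col2 = [T[0][1], T[1][1]]
combination = [v[0] * c1 + v[1] * c2 for c1, c2 in zip(col1, col2)]

# The product equals v1 * (first column) + v2 * (second column).
assert matvec(T, v) == combination   # both are [-1, 1]
```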

If these column vectors form a basis then the matrix can be thought of as encoding a change of basis. The vector output from matrix-vector multiplication is simply the input vector expressed in terms of another basis. The column vectors of the matrix are the coordinates of the new basis vectors.

The columns of a transition matrix from an old basis to a new basis are the coordinate vectors of the old basis relative to the new basis.

Consider the following matrix, \(T_{Shear} = \begin{bmatrix} {\color{vred}1} & {\color{vblue}0} \\ {\color{vred}1} & {\color{vblue}1} \end{bmatrix} \)

If we multiply any vector \(\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \in \mathbb{R}^2 \) by \( T_{Shear} \) we get as output another vector \(\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \in \mathbb{R}^2 \), so \( T_{Shear}: \mathbb{R}^2 \to \mathbb{R}^2 \).

\( \overrightarrow{w} \) will be a linear combination of the matrix's column vectors, \( \overrightarrow{w} = v_1{\color{vred}\begin{bmatrix} 1 \\ 1 \end{bmatrix}} + v_2{\color{vblue}\begin{bmatrix} 0 \\ 1 \end{bmatrix}} \).

Since the column vectors form a basis for \( \mathbb{R}^2 \) we can view this matrix as describing a change of basis, where \( \overrightarrow{i} \mapsto {\color{vred}\begin{bmatrix} 1 \\ 1 \end{bmatrix}} \) and \( \overrightarrow{j} \mapsto {\color{vblue}\begin{bmatrix} 0 \\ 1 \end{bmatrix}} \).

Each dot in the animation corresponds to a vector, and the effect of \( T_{Shear} \) can be visualized as a shearing of \( \mathbb{R}^2 \), because every vector is sheared when multiplied by the matrix. In this manner matrices encode a transformation, or movement, of space.
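The shear above can be reproduced in a few lines of Python; the third test vector is an arbitrary illustrative choice:

```python
def shear(v):
    """Apply T_shear = [[1, 0], [1, 1]] to the 2-vector v:
    the x-coordinate is unchanged, the y-coordinate gains the x-coordinate."""
    return [v[0], v[0] + v[1]]

assert shear([1, 0]) == [1, 1]   # i-hat lands on (1, 1)
assert shear([0, 1]) == [0, 1]   # j-hat is unchanged
assert shear([2, 3]) == [2, 5]   # 2*[1, 1] + 3*[0, 1]
```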

To encode a rotation in a matrix, make the column vectors the coordinates of the rotated basis vectors.

For example the following matrix, \( \begin{bmatrix} {\color{vred}0} & {\color{vblue}-1} \\ {\color{vred}1} & {\color{vblue}0} \end{bmatrix} \) would encode a rotation by 90 degrees. This is because it would map \( \overrightarrow{i} \mapsto \color{vred} \begin{bmatrix} 0 \\ 1 \end{bmatrix} \) and \( \overrightarrow{j} \mapsto \color{vblue}\begin{bmatrix} -1 \\ 0 \end{bmatrix} \) , which is where \( \overrightarrow{i} \) and \( \overrightarrow{j} \) would land if they were rotated by 90 degrees. Multiplying any vector by this matrix would then express that vector as a linear combination of the rotated basis vectors.

For a general rotation, the matrix \( T_{rot} = \begin{bmatrix} {\color{vred}\cos \theta} & {\color{vblue}-\sin \theta} \\ {\color{vred}\sin \theta} & {\color{vblue}\cos \theta} \end{bmatrix} \) encodes a rotation of \( \mathbb{R}^2 \) by an angle \( \theta \), because it maps \( \overrightarrow{i} \mapsto \color{vred} \begin{bmatrix} \cos \theta \\ \sin \theta \end{bmatrix} \) and \( \overrightarrow{j} \mapsto \color{vblue}\begin{bmatrix} -\sin \theta \\ \cos \theta \end{bmatrix} \).
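This matrix can be built and checked numerically; note the angle here is in radians:

```python
import math

def rotation(theta):
    """2x2 rotation matrix for an angle theta in radians."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def matvec(A, v):
    """Apply a 2x2 matrix A to a 2-vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

R = rotation(math.pi / 2)   # a quarter turn (90 degrees)
w = matvec(R, [1, 0])       # rotate i-hat

# i-hat lands (up to floating-point error) on (0, 1)
assert abs(w[0]) < 1e-12 and abs(w[1] - 1) < 1e-12
```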

Move the **slider** to change the angle of rotation and to show the matrix which encodes the rotation.


To encode a scaling in a matrix, make the column vectors the coordinates of the scaled basis vectors.

For example the following matrix, \( \begin{bmatrix} {\color{vred}2} & {\color{vblue}0} \\ {\color{vred}0} & {\color{vblue}2} \end{bmatrix} \) would encode a scaling of space by a factor of 2. This is because it would map \( \overrightarrow{i} \mapsto \color{vred} \begin{bmatrix} 2 \\ 0 \end{bmatrix} \) and \( \overrightarrow{j} \mapsto \color{vblue}\begin{bmatrix} 0 \\ 2 \end{bmatrix} \) , which is where \( \overrightarrow{i} \) and \( \overrightarrow{j} \) would land if they were scaled by a factor of two. Multiplying any vector by this matrix would then express that vector as a linear combination of the scaled basis vectors.

For a general scaling, the matrix \( T_{scale} = \begin{bmatrix} {\color{vred}s_x} & {\color{vblue}0} \\ {\color{vred}0} & {\color{vblue}s_y} \end{bmatrix} \) encodes a scaling of \( \mathbb{R}^2 \) by a factor of \( s_x \) in the direction of \( \overrightarrow{i} \) and by a factor of \( s_y \) in the direction of \( \overrightarrow{j} \), because it maps \( \overrightarrow{i} \mapsto \color{vred} \begin{bmatrix} s_x \\ 0 \end{bmatrix} \) and \( \overrightarrow{j} \mapsto \color{vblue}\begin{bmatrix} 0 \\ s_y \end{bmatrix} \).
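A minimal sketch of this diagonal scaling matrix, with arbitrary illustrative scale factors:

```python
def scale(sx, sy, v):
    """Apply the diagonal matrix [[sx, 0], [0, sy]] to the 2-vector v."""
    return [sx * v[0], sy * v[1]]

assert scale(2, 2, [1, 0]) == [2, 0]       # i-hat scaled by 2
assert scale(2, 2, [0, 1]) == [0, 2]       # j-hat scaled by 2
assert scale(3, 0.5, [4, 2]) == [12, 1.0]  # stretch in x, squash in y
```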

The interactive program below shows how a matrix can encode a **scaling** of \( \mathbb{R^2}\).

Move the two **sliders** to change the factor by which space is scaled.
