The traditional notion of a matrix is a rectangular array of known or unknown numbers.
One simple role for a matrix: packing together a bunch of columns or rows
A matrix <math>A</math> has entries <math>A_{i,j}</math> (also written <math>A[i,j]</math>):
<math>A[i,j]</math> | <math>j=@</math> | <math>j=\#</math> | <math>j=\%</math> |
---|---|---|---|
<math>i=a</math> | 1 | 2 | 3 |
<math>i=b</math> | 4 | 5 | 6 |
where:
Index | Type | In Matrix A |
---|---|---|
i | rows | 2 rows [1,2,3] and [4,5,6] |
j | columns | 3 columns: [1,4], [2,5] and [3,6] |
Rows and columns are themselves vectors: each row is a vector over the column labels, and each column is a vector over the row labels.
The size, or order, of a matrix is given by identifying the number of rows and columns.
The order of matrix A is 2×3
Matrix as a function: an R x C matrix over a field F is a function from R x C to F, where R is the set of row labels and C is the set of column labels.
As it is a function, it can also be interpreted as an R x C-vector: a vector whose domain is the set of pairs R x C.
From matrix to function: <math>f (x) = M * x</math> where M is a matrix
If M is an R x C matrix over <math>\mathbb{F}</math>, then f maps <math>\mathbb{F}^C</math> to <math>\mathbb{F}^R</math>.
Example of function:
Let M be the matrix <math> \begin{array}{r|rrr} & \# & @ & ? \\ \hline a & 1 & 2 & 3 \\ b & 10 & 20 & 30 \end{array} </math> and define <math>f ({\bf x}) = M * {\bf x}</math> then:
f maps <math> \begin{array}{rrr} \# & @ & ? \\ \hline 2 & 2 & -2 \\ \end{array} </math> to <math> \begin{array}{rr} a & b \\ \hline 0 & 0 \\ \end{array} </math>
Define <math> f({\bf x}) = \begin{bmatrix} 1 & 2 & 3 \\ 10 & 20 & 30 \end{bmatrix} * {\bf x} </math> :
f maps [2, 2,-2] to [0,0]
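Here is a minimal Python sketch of this matrix-to-function view, using plain dicts keyed by row and column labels (the names and representation are illustrative, not a particular library's matrix class):
<code python>
def matrix_vector_mul(M, x):
    # M is stored as {(row_label, col_label): value},
    # x as {col_label: value}; returns f(x) = M*x as {row_label: value}.
    rows = {r for (r, c) in M}
    return {r: sum(v * x[c] for (r2, c), v in M.items() if r2 == r)
            for r in rows}

M = {('a', '#'): 1,  ('a', '@'): 2,  ('a', '?'): 3,
     ('b', '#'): 10, ('b', '@'): 20, ('b', '?'): 30}
x = {'#': 2, '@': 2, '?': -2}

print(matrix_vector_mul(M, x))  # {'a': 0, 'b': 0} (key order may vary)
</code>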
For a function <math>f({\bf x}) = M * {\bf x}</math> defined this way, we know that the domain of f is <math>\mathbb{F}^C</math> and the codomain of f is <math>\mathbb{F}^R</math>.
Transpose swaps rows and columns. See Dimensional Data Operation - (Pivot|Transpose|Cross-tab|Matrix)
<MATH> \begin{bmatrix} \begin{array}{rrr} 4 & 1 & -3 \\ 2 & 2 & -2 \end{array} \end{bmatrix}^T = \begin{bmatrix} \begin{array}{rr} 4 & 2 \\ 1 & 2 \\ -3 & -2 \end{array} \end{bmatrix} </MATH>
<MATH> \begin{bmatrix} \begin{array}{r} 4 \\ 1 \\ -3 \end{array} \end{bmatrix}^T = \begin{bmatrix} \begin{array}{rrr} 4 & 1 & -3 \end{array} \end{bmatrix} </MATH>
<MATH> \begin{bmatrix} \begin{array}{rr} 4 & 2 \\ 1 & 2 \\ -3 & -2 \end{array} \end{bmatrix}^T = \begin{bmatrix} \begin{array}{rrr} 4 & 1 & -3 \\ 2 & 2 & -2 \\ \end{array} \end{bmatrix} </MATH>
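In the same sparse-dict representation used above, transposing amounts to swapping the two key components (a sketch with illustrative names):
<code python>
def transpose(M):
    # Swap rows and columns of a matrix stored as {(row, col): value}.
    return {(c, r): v for (r, c), v in M.items()}

M = {(0, 0): 4, (0, 1): 1, (0, 2): -3,
     (1, 0): 2, (1, 1): 2, (1, 2): -2}
print(transpose(M))
# {(0, 0): 4, (1, 0): 1, (2, 0): -3, (0, 1): 2, (1, 1): 2, (2, 1): -2}
</code>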
Only square matrices can be inverted, and square matrices are not guaranteed to have an inverse. If the inverse exists, then multiplying a matrix by its inverse will produce the identity matrix.
Theorem: The transpose of an invertible matrix is invertible.
If <math>A</math> has an inverse <math>A^{-1}</math>, then <math>AA^{-1}</math> is an identity matrix.
Converse: if BA is an identity matrix, does it follow that A and B are inverses of each other? Not always; it can fail when A and B are not square.
Theorem: Suppose A and B are square matrices such that BA is an identity matrix. Then A and B are inverses of each other.
Matrices A and B are inverses of each other if and only if both AB and BA are identity matrices.
<MATH> A B = A A^{-1} = I_n </MATH>
A matrix that has an inverse is said to be invertible.
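These properties are easy to check numerically, for example with NumPy (an illustration on an arbitrary invertible matrix, not part of the original notes):
<code python>
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # square, determinant -2, hence invertible

A_inv = np.linalg.inv(A)

# Both products are (numerically) the 2x2 identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True
</code>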
Corollary
Let A be an R x C matrix. Then A is invertible if and only if |R| = |C| and the columns of A are linearly independent.
Proof: Let <math>\mathbb{F}</math> be the field. Define <math>f : \mathbb{F}^C \rightarrow \mathbb{F}^R</math> by <math>f({\bf x}) = A{\bf x}</math>. Then A is an invertible matrix if and only if f is an invertible function.
The function f is invertible if and only if nullity A = 0 and |R| = |C|: nullity A = 0 means the columns of A are linearly independent, which makes f one-to-one, and together with |R| = |C| it makes f onto as well.
The null space of a matrix A (written Null A) is: <MATH> \{u : A * u = 0\} </MATH>
The nullity of a matrix A is the dimension of its null space, written nullity A.
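As an example, SymPy can compute a basis of the null space; the nullity is the number of basis vectors. Reusing the matrix from the function example above (a sketch for illustration):
<code python>
from sympy import Matrix

M = Matrix([[1, 2, 3],
            [10, 20, 30]])

basis = M.nullspace()          # basis of {u : M*u = 0}
print(len(basis))              # nullity of M: 2
print(M * Matrix([2, 2, -2]))  # [2, 2, -2] is in Null M, so this is [0, 0]
</code>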
A vector can be seen as a matrix: a row vector is a 1 x C matrix, and a column vector is an R x 1 matrix.
D x D identity matrix is the matrix <math>1_D</math> such that <math>1_D [k, k] = 1 \text{ for all } k \in D</math> and zero elsewhere.
Often the letter I (for “identity”) is used instead of 1.
<MATH> \mathbf{I_n} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix} </MATH>
def identity(D): return Mat((D, D), {(k, k): 1 for k in D})  # assumes Mat, a dictionary-backed matrix class taking ((rows, cols), entries)
Let <math>d_1, \dots , d_n</math> be real numbers. Let <math>f : \mathbb{R}^n \rightarrow \mathbb{R}^n </math> be the function such that <math>f ([x_1, \dots , x_n]) = [d_1*x_1, \dots , d_n*x_n]</math>. This function is represented by the matrix whose diagonal entries are <math>d_1, \dots , d_n</math> and whose off-diagonal entries are zero.
For a domain D, a D x D matrix M is a diagonal matrix if M[r , c] = 0 for every pair <math>r, c \in D</math> such that <math>r \neq c</math> .
A matrix is called a diagonal matrix when the only entries allowed to be nonzero lie on the diagonal.
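In the same spirit as identity(D) above, a diagonal matrix can be built from a dict of diagonal entries; this sketch uses the plain sparse-dict representation rather than the Mat class:
<code python>
def diagonal(entries):
    # Build a diagonal matrix {(k, k): value} from {label: value};
    # all off-diagonal entries are implicitly zero.
    return {(k, k): v for k, v in entries.items()}

print(diagonal({'x': 4, 'y': -1, 'z': 7}))
# {('x', 'x'): 4, ('y', 'y'): -1, ('z', 'z'): 7}
</code>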
Linear Algebra - Triangular Matrix
If the columns of a matrix are orthonormal, it is a column-orthogonal matrix (which arguably should be called column-orthonormal…).
If a matrix is square and column-orthogonal, it is an orthogonal matrix.
For a matrix M: the row rank of M is the maximum number of linearly independent rows, and the column rank of M is the maximum number of linearly independent columns.
Equivalently, the row rank of M is the dimension of Row M, and the column rank of M is the dimension of Col M.
Consider the matrix <MATH> M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 2 & 4 & 0 \end{bmatrix} </MATH> where: the row rank is 2 (the third row is twice the first plus twice the second) and the column rank is 2 (the first two columns are linearly independent and the third is zero).
Rank Theorem: For every matrix M, row rank equals column rank.
Lemma: For any matrix A, the row rank of A is less than or equal to the column rank of A. Applying the lemma to <math>A^T</math> gives the reverse inequality, which proves the theorem.
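The theorem can be checked numerically: NumPy's matrix_rank applied to the matrix M above and to its transpose returns the same value (illustrative check):
<code python>
import numpy as np

M = np.array([[1, 0, 0],
              [0, 2, 0],
              [2, 4, 0]])

print(np.linalg.matrix_rank(M))    # 2 (column rank)
print(np.linalg.matrix_rank(M.T))  # 2 (row rank equals column rank)
</code>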
A sparse matrix has many positions with a value of zero.
Systems designed to efficiently support sparse matrices look a lot like databases: They represent each cell as a record (i,j,value).
The benefit is that you only need one record for every non-zero element of a matrix.
For example, the matrix
<MATH> \begin{bmatrix} 0 & 2 & -1 \\ 1 & 0 & 0 \\ 0 & 0 & -3 \\ 0 & 0 & 0 \end{bmatrix} </MATH>
can be represented as a table
row # (i) | column # (j) | value |
---|---|---|
0 | 1 | 2 |
0 | 2 | -1 |
1 | 0 | 1 |
2 | 2 | -3 |
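A minimal Python sketch of this record-per-nonzero idea, converting a dense list-of-lists matrix to a dict keyed by (i, j) (names are illustrative):
<code python>
def to_sparse(dense):
    # Keep one (i, j) -> value record per non-zero entry.
    return {(i, j): v
            for i, row in enumerate(dense)
            for j, v in enumerate(row)
            if v != 0}

dense = [[0, 2, -1],
         [1, 0,  0],
         [0, 0, -3],
         [0, 0,  0]]

print(to_sparse(dense))
# {(0, 1): 2, (0, 2): -1, (1, 0): 1, (2, 2): -3}
</code>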
Two matrices may be added or subtracted only if they are of the same order.
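A sketch of element-wise addition for the sparse representation above; since a sparse dict does not record its order, the orders are passed explicitly to enforce the rule just stated:
<code python>
def add_sparse(A, B, order_a, order_b):
    # Element-wise sum of two sparse {(i, j): value} matrices;
    # addition is only defined when the orders match.
    if order_a != order_b:
        raise ValueError("matrices must have the same order")
    return {k: A.get(k, 0) + B.get(k, 0) for k in set(A) | set(B)}

A = {(0, 1): 2, (1, 0): 1}
B = {(0, 1): 3, (1, 1): -1}
print(add_sparse(A, B, (2, 2), (2, 2)))
# {(0, 1): 5, (1, 0): 1, (1, 1): -1}
</code>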