Basics of Linear Algebra
Weikai Chen, 2021/03/11
This is a lecture note for Marxian Economic Theory, a course at Renmin University of China. The note is intended mainly for senior or graduate students majoring in economics, so I assume that students have taken a course in linear algebra before.
The purpose of this note is to review the basic concepts and methods of linear algebra and to prepare students for the Perron-Frobenius theorems on positive and nonnegative matrices. Nonnegative matrices arise in many areas, such as economics, population models, graph theory, and Markov chains. The Perron-Frobenius theory is one of the most powerful tools for analyzing nonnegative matrices and is the workhorse of mathematical Marxian economics. Given its importance, and the fact that it is new to most students, I will discuss the P-F theorems in a separate note.
This note is written in Pluto, a reactive notebook for Julia.
Linear algebra studies linear transformations on vector spaces, which can be represented by matrices. We will focus on the Euclidean space $\mathbb{R}^n$.
Vectors in $\mathbb{R}^n$

A vector in $\mathbb{R}^n$ is an ordered list of $n$ real numbers. For example, $x = (1.5, 2.0)$ is a vector in $\mathbb{R}^2$:

```julia
using LinearAlgebra

x = [1.5, 2.0]
```
```julia
y = [0.5, 2.0]
z = x + y    # vector addition: [2.0, 4.0]
```
```julia
w = 2*x      # scalar multiplication: [3.0, 4.0]
```
Now let's plot those vectors.
Linear Combinations
Given a set of vectors $v_1, \dots, v_k$ in $\mathbb{R}^n$, a linear combination of these vectors is a vector of the form

$$y = a_1 v_1 + a_2 v_2 + \cdots + a_k v_k.$$

That is, $y$ is obtained by scaling each of the vectors and summing the results. In this context, the values $a_1, \dots, a_k$ are called the coefficients of the linear combination.

The set of these linear combinations,

$$\mathrm{span}(v_1, \dots, v_k) = \{a_1 v_1 + \cdots + a_k v_k : a_1, \dots, a_k \in \mathbb{R}\},$$

is called the span of $v_1, \dots, v_k$. Note that the coefficients of a linear combination can be any real numbers: positive, negative, or zero.

A set of vectors is called linearly dependent if some vector in the set can be written as a linear combination of the others. A set of vectors is called linearly independent if it is not linearly dependent.

It can be shown that $v_1, \dots, v_k$ are linearly independent if and only if

$$a_1 v_1 + \cdots + a_k v_k = 0 \implies a_1 = \cdots = a_k = 0.$$

If $k > n$, then any set of $k$ vectors in $\mathbb{R}^n$ is linearly dependent. If a set of $n$ vectors in $\mathbb{R}^n$ is independent, then its span is all of $\mathbb{R}^n$.

Therefore, a linearly independent set of $n$ vectors in $\mathbb{R}^n$ is called a basis of $\mathbb{R}^n$: every vector can be written as a linear combination of the basis vectors, with unique coefficients.

Below is an example of a basis for $\mathbb{R}^2$: the standard basis $e_1 = (1, 0)$ and $e_2 = (0, 1)$. For any $x = (x_1, x_2)$ in $\mathbb{R}^2$, we have $x = x_1 e_1 + x_2 e_2$.
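These definitions are easy to check in Julia. Below is a small sketch (the vectors here are arbitrary examples chosen for illustration): the rank of the matrix whose columns are the given vectors equals the number of independent vectors among them.

```julia
using LinearAlgebra

e1, e2 = [1.0, 0.0], [0.0, 1.0]   # the standard basis of ℝ²
x = [1.5, 2.0]
x == x[1]*e1 + x[2]*e2            # true: x is a linear combination of e₁ and e₂

rank([e1 e2])                     # 2 → e₁ and e₂ are linearly independent
rank([e1 2*e1])                   # 1 → e₁ and 2e₁ are linearly dependent
```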
Inner Product and Norm
The inner product of vectors $x, y \in \mathbb{R}^n$ is defined as

$$x \cdot y = \sum_{i=1}^{n} x_i y_i.$$

The inner product is also denoted by $x^T y$ or $\langle x, y \rangle$.

Two vectors are called orthogonal if their inner product is zero.

The norm of a vector $x$ is defined as

$$\|x\| = \sqrt{x \cdot x} = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2}.$$

The expression $\|x - y\|$ gives the distance between $x$ and $y$.
```julia
dot(x, y)    # the inner product of x and y: 4.75
x'*y         # the same result: 4.75
norm(x)      # the norm of vector x: 2.5
sqrt(x'*x)   # the same result: 2.5
norm(x - y)  # the distance between x and y: 1.0
```
Matrices
A matrix is a rectangular array of numbers. A matrix with $n$ rows and $k$ columns,

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{pmatrix},$$

is called an $n \times k$ matrix.

The matrix formed by replacing the $(i,j)$ entry of $A$ with the $(j,i)$ entry, that is, by swapping rows and columns, is called the transpose of $A$, denoted $A^T$ or $A'$.

For a square matrix, the number of rows equals the number of columns. A diagonal matrix is a square matrix whose entries off the principal diagonal are all zero.

Denote each column of the matrix by $a_j$, so that we can write $A = (a_1, a_2, \dots, a_k)$. Similarly, we can write the matrix as a stack of its rows.
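To make these definitions concrete, here is a small sketch in Julia (the matrix `M` is an arbitrary example, not used elsewhere in this note):

```julia
using LinearAlgebra

M = [1 2 3;
     4 5 6]             # a 2×3 matrix: 2 rows, 3 columns
M'                      # its transpose, a 3×2 matrix
M[:, 2]                 # the second column of M: [2, 5]
Diagonal([3.0, 5.0])    # a 2×2 diagonal matrix: off-diagonal entries are zero
```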
Matrix Operations
Just as was the case for vectors, a number of algebraic operations are defined for matrices.
Scalar multiplication and addition are immediate generalizations of the vector case:

$$\gamma A = \left(\gamma a_{ij}\right) \quad \text{and} \quad A + B = \left(a_{ij} + b_{ij}\right).$$

In the latter case, the matrices must have the same shape in order for the definition to make sense.
```julia
A = [3.0 1;
     2 5]
B = A'       # the transpose of A
C = A + B    # [6.0 3.0; 3.0 10.0]
```
We also have a convention for multiplying a matrix by a vector.

For an $n \times n$ square matrix $A$ and a vector $x \in \mathbb{R}^n$, the product $b = Ax$ is the vector whose $i$-th element is

$$b_i = \sum_{j=1}^{n} a_{ij} x_j.$$

Another useful form of $Ax$ is

$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n,$$

which is a linear combination of the set of column vectors $a_1, \dots, a_n$, with coefficients given by the elements of $x$.
```julia
a1 = A[:,1]           # the first column: [3.0, 2.0]
a2 = A[:,2]           # the second column: [1.0, 5.0]
b = A*x               # [6.5, 13.0]
a1*x[1] + a2*x[2]     # the same result
```
Matrix and Linear Transformation
Linear transformation and matrix representation
A function $f: \mathbb{R}^k \to \mathbb{R}^n$ is called linear if,

for any $x, y \in \mathbb{R}^k$ and any scalars $\alpha, \beta$,

$$f(\alpha x + \beta y) = \alpha f(x) + \beta f(y).$$

In effect, a function $f$ is linear if and only if it can be represented by an $n \times k$ matrix $A$, in the sense that $f(x) = Ax$ for all $x$.
Proof (sketch).

First, let $f(x) = Ax$; the rules of matrix arithmetic imply that $f$ is linear.

Second, construct a matrix as follows: choose the standard basis $e_1, \dots, e_k$ of $\mathbb{R}^k$ and let $A$ be the matrix whose $j$-th column is $f(e_j)$, that is, $A = (f(e_1), f(e_2), \dots, f(e_k))$.

Finally, show that $f(x) = Ax$ for every $x$: since $x = x_1 e_1 + \cdots + x_k e_k$, linearity gives $f(x) = x_1 f(e_1) + \cdots + x_k f(e_k) = Ax$.
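The construction in the proof can be checked numerically. In the sketch below (the map `f` is an arbitrary linear example), the matrix assembled from the images of the standard basis vectors reproduces `f` exactly:

```julia
using LinearAlgebra

f(x) = [3.0 1.0; 2.0 5.0] * x    # an example linear map on ℝ²

e1, e2 = [1.0, 0.0], [0.0, 1.0]  # the standard basis of ℝ²
A_f = [f(e1) f(e2)]              # the matrix whose j-th column is f(eⱼ)

x = [1.5, 2.0]
f(x) == A_f * x                  # true: A_f represents f
```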
Inverse of linear transformation and inverse matrix
What is the range of the function $f(x) = Ax$?

Since $Ax$ is a linear combination of the columns of $A$, the range of $f$ is the span of those columns.

Moreover, if the columns are linearly independent, then the range is all of $\mathbb{R}^n$, and for each $b \in \mathbb{R}^n$ the equation $f(x) = b$ has a unique solution; that is, $f$ is invertible.

It can be verified that the inverse function $f^{-1}$ is also linear, and thus can be represented by a matrix.

We call the matrix representing $f^{-1}$ the inverse of $A$, denoted $A^{-1}$,

and then $x = f^{-1}(b) = A^{-1} b$ solves $Ax = b$.
```julia
inv(A)       # the inverse of matrix A
inv(A) * b   # x = A⁻¹b recovers [1.5, 2.0]
```
Composition of linear transformations and matrix multiplication

If $f$ and $g$ are linear functions, then the composition $h(x) = g(f(x))$

is also linear, and thus can be represented by a matrix.

How can we calculate that matrix? If $B$ represents $f$ and $A$ represents $g$,

then their product $AB$, with entries

$$(AB)_{ij} = \sum_{l} a_{il} b_{lj},$$

represents the composition. That is, $h(x) = g(f(x)) = A(Bx) = (AB)x$.
```julia
D = A * B   # [10.0 11.0; 11.0 29.0]
```
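We can verify numerically that applying $B$ and then $A$ agrees with applying the product matrix directly (restating `A`, `B`, and `x` from above so the snippet runs on its own):

```julia
A = [3.0 1; 2 5]
B = A'
x = [1.5, 2.0]

A * (B * x) ≈ (A * B) * x   # true: the composition is represented by AB
```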
Matrix and System of Linear Equations
Often, the numbers in the matrix represent coefficients in a system of linear equations

$$\begin{aligned} a_{11} x_1 + \cdots + a_{1k} x_k &= b_1 \\ &\;\;\vdots \\ a_{n1} x_1 + \cdots + a_{nk} x_k &= b_n. \end{aligned}$$

The objective here is to solve for the "unknowns" $x_1, \dots, x_k$, given the coefficients $a_{ij}$ and the constants $b_1, \dots, b_n$.

This system of equations can be written compactly as

$$Ax = b,$$

or

$$x_1 a_1 + x_2 a_2 + \cdots + x_k a_k = b.$$

Therefore, to solve the system is to find coefficients that express $b$ as a linear combination of the columns of $A$; when $A$ is invertible, the solution is $x = A^{-1}b$.

Note:

(1) If the columns of $A$ are linearly independent and $b$ lies in their span, the solution is unique.

(2) If the columns of $A$ are linearly dependent, then $Ax = b$ has either no solution or infinitely many solutions.
```julia
A\b   # solve the system of equations Ax = b: [1.5, 2.0]
```
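Point (2) can be illustrated with a singular matrix (a made-up example: its second column is twice its first):

```julia
using LinearAlgebra

S = [1.0 2.0;
     2.0 4.0]   # the second column is 2 × the first: columns are dependent
rank(S)         # 1 < 2, so Sx = b has no solution or infinitely many,
                # depending on b
```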
Determinant
Given a square matrix $A$, is there a simple way to tell whether its columns are linearly independent?

There is a function $\det$ that assigns to each square matrix a number, called its determinant.

In effect, $\det(A) \neq 0$ if and only if $A$ is invertible.

When $A$ is a $2 \times 2$ matrix,

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

we have

$$\det(A) = ad - bc.$$

The determinant of a matrix determines whether the column vectors are linearly independent or not.
```julia
determinant = det(A)            # the determinant of matrix A: 13.0
A[1,1]*A[2,2] - A[1,2]*A[2,1]   # same result
```
I won't dig into details for the calculation of determinants in general. Instead, let's look at its geometric intuition.
Take the example of a $2 \times 2$ matrix $A$. The associated linear transformation maps the unit square, with sides $e_1$ and $e_2$, to the parallelogram with sides $a_1 = Ae_1$ and $a_2 = Ae_2$.

The determinant measures how the transformation scales areas.

Since the area of the unit square is 1, the absolute value $|\det(A)|$ equals the area of the parallelogram.

In this case, the area of the parallelogram is 13.

If $\det(A) = 0$, the columns are linearly dependent, and the parallelogram collapses onto a line segment with zero area.

Note that the determinant can be negative, when the linear transformation flips the orientation of the space. For example,
```julia
A_flip = [1 3; 5 2]   # compare it with A = [3 1; 2 5]
det(A_flip)           # -13.0 = -det(A)
det(A')               # 13.0 = det(A)
```
Eigenvalue and Eigenvector
Let $A$ be an $n \times n$ square matrix.

If $\lambda$ is a scalar and $v$ is a non-zero vector in $\mathbb{R}^n$ such that

$$Av = \lambda v,$$

then we say that $\lambda$ is an eigenvalue of $A$, and $v$ is a corresponding eigenvector.

Thus, an eigenvector of $A$ is a vector that $A$ maps onto a multiple of itself: the transformation only scales it, by the factor $\lambda$.
```julia
evals, evecs = eigen(A)   # all eigenvalues and corresponding eigenvectors
# evals = [2.267949192431123, 5.732050807568878]
# evecs = [-0.806898 -0.343724; 0.59069 -0.939071]

v = evecs[:,1]    # one eigenvector
A * v             # [-1.83, 1.33966]
evals[1] * v      # λ₁v gives the same result

u = evecs[:,2]    # another eigenvector
A * u             # [-1.97024, -5.3828]
evals[2] * u      # λ₂u gives the same result
```
The next figure shows the two eigenvectors $v$ and $u$, together with their images $Av$ and $Au$.
Suppose that $\lambda$ is an eigenvalue of $A$. Then the equation

$$(\lambda I - A) v = 0$$

has a non-zero solution. In other words, the columns of the matrix $\lambda I - A$ are linearly dependent, which is equivalent to

$$\det(\lambda I - A) = 0.$$

The eigenvalues of $A$ are exactly the roots of this characteristic polynomial in $\lambda$.
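As a numerical check (restating `A` so the snippet runs on its own), $\det(\lambda I - A)$ vanishes, up to floating-point error, at each eigenvalue:

```julia
using LinearAlgebra

A = [3.0 1.0; 2.0 5.0]
evals = eigvals(A)               # the eigenvalues 4 ∓ √3

[det(λ*I - A) for λ in evals]    # ≈ [0.0, 0.0]
```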
The next figure shows the plot of the characteristic polynomial $\det(\lambda I - A)$.
```julia
using Plots, LaTeXStrings

begin
    Determinant(λ; matrix=A) = det(λ * Matrix(I, 2, 2) - matrix)
    λ = 1:0.01:7
    determinant_λ = [Determinant(i) for i in λ]
    plot(λ, determinant_λ, legend = false, framestyle = :origin)
    plot!(xlab = L"\lambda", ylab = L"\det(\lambda I- A)")
end
```