# Eigenvectors

## Notes to self

Eigenvectors are vectors that **only scale** (i.e., change in magnitude, not
direction) when a given linear transformation (represented by a matrix)
is applied to them. The scaling factor by which an eigenvector is multiplied
when the transformation is applied is called an eigenvalue.

Given a square matrix $A$, a vector $v$ is an eigenvector of $A$ if $v$ is not the zero vector and there is some scalar $\lambda$ such that applying $A$ to $v$ results in a scalar multiple of $v$, i.e., the direction of $v$ remains unchanged. In equation form, this is written as: $A \cdot v = \lambda \cdot v$, where $\cdot$ denotes matrix multiplication on the left-hand side and scalar multiplication on the right.

$\lambda$ is the eigenvalue corresponding to the eigenvector $v$
in the above equation. It represents the scalar multiple by which the
eigenvector is *stretched* or *compressed* (if you can't recall linear
transformations, you can refer to Khan Academy's
Matrix Transformations lecture for a refresher).
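As a quick numerical sanity check of the definition (the matrix here is a hypothetical example I picked for illustration, not anything special):

```python
import numpy as np

# Hypothetical example: for the diagonal matrix A = [[2, 0], [0, 3]],
# the standard basis vectors are eigenvectors with eigenvalues 2 and 3.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
v = np.array([1.0, 0.0])

# Applying A only scales v by its eigenvalue (2); the direction is unchanged.
print(A @ v)  # [2. 0.]
print(2 * v)  # [2. 0.]
```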

To find the eigenvalues of a matrix $A$, we follow two steps. First we set up the characteristic equation, and then we solve for $\lambda$:

- ☝️ **Characteristic Equation:** You set up the equation $\det(A - \lambda \cdot I) = 0$, where $\det$ represents the determinant of a matrix, and $I$ is the identity matrix of the same size as $A$. This equation is derived from the eigenvector equation $A \cdot v = \lambda \cdot v$ and the fact that $v$ is non-zero.
- ✌️ **Solve for $\lambda$:** Solving the characteristic equation will give you the eigenvalues $\lambda$ of the matrix $A$.
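Both steps can be sketched with `numpy`, using the same `2x2` matrix as the worked example below (`np.poly` and `np.roots` are real NumPy functions, but treat this as an illustrative sketch rather than the usual way to get eigenvalues):

```python
import numpy as np

A = np.array([[4, 1], [2, 3]])

# Step 1: np.poly returns the coefficients of det(A - λI) as a
# polynomial in λ, highest degree first: λ² - 7λ + 10.
coeffs = np.poly(A)
print(coeffs)  # [ 1. -7. 10.]

# Step 2: the roots of the characteristic polynomial are the eigenvalues.
print(np.roots(coeffs))
```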

Once the eigenvalues $\lambda$ are known, the eigenvectors can be found by:

- 👉 **Substitution:** For each eigenvalue $\lambda$, you substitute $\lambda$ back into the equation $A \cdot v = \lambda \cdot v$ (which can be rewritten as $(A - \lambda \cdot I) \cdot v = 0$) and solve for $v$.
- 👉 **Solving the System:** Typically, you'll get a system of linear equations for $v$, which you'll need to solve. Any non-zero vector that satisfies the system of equations is an eigenvector corresponding to the eigenvalue $\lambda$.
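"Solving the system" amounts to finding the null space of $A - \lambda \cdot I$. One way to sketch that in `numpy` is via the SVD: right-singular vectors whose singular value is (numerically) zero span the null space. The helper below is my own, and the matrix and eigenvalue come from the worked example that follows:

```python
import numpy as np

def null_space_vector(M, tol=1e-10):
    """Return one unit basis vector of M's null space (assumes it is non-trivial)."""
    _, s, vh = np.linalg.svd(M)
    # Rows of vh whose singular value is ~0 span the null space.
    return vh[s < tol][0]

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam = 5.0

v = null_space_vector(A - lam * np.eye(2))
print(np.allclose(A @ v, lam * v))  # True
```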

Let's consider a `2x2` matrix $A = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix}$:

- Characteristic Equation: First, we find the determinant of $A - \lambda \cdot I$: $\det \begin{bmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{bmatrix} = (4 - \lambda)(3 - \lambda) - (1)(2) = \lambda^2 - 7\lambda + 10$

- Solving for $\lambda$: We solve $\lambda^2 - 7\lambda + 10 = 0$ to find the eigenvalues. The solutions to this quadratic equation are the eigenvalues of $A$, which are $\lambda = 5$ and $\lambda = 2$.

Now comes the real magic. We can find the eigenvectors by plugging each eigenvalue into the equation $(A− \lambda \cdot I) \cdot v=0$ and solving for $v$. For $\lambda = 5$:

Substituting $\lambda = 5$ gives $A - 5I = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix}$, so the system simplifies to $-v_1 + v_2 = 0$, and one eigenvector *could* be $v = [1, 1]$ for $\lambda = 5$.

Similarly, for $\lambda = 2$, we get $A - 2I = \begin{bmatrix} 2 & 1 \\ 2 & 1 \end{bmatrix}$, so the system simplifies to $2v_1 + v_2 = 0$, and one eigenvector *could* be $v = [1, -2]$ for $\lambda = 2$.

This process reveals the eigenvalues $\lambda=5$ and $\lambda=2$, with corresponding eigenvectors $[1,1]$ and $[1,−2]$, respectively. Each eigenvector is associated with one eigenvalue, and these vectors indicate the "directions" in which the linear transformation represented by matrix $A$ acts by stretching/compressing, without rotating.
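We can double-check both eigenpairs numerically; each hand-computed vector should satisfy $A \cdot v = \lambda \cdot v$ exactly:

```python
import numpy as np

A = np.array([[4, 1], [2, 3]])

for lam, v in [(5, np.array([1, 1])), (2, np.array([1, -2]))]:
    # A·v and λ·v should print the same vector for each pair.
    print(A @ v, lam * v)
```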

Using `numpy` to find the eigenvalues and eigenvectors:
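A minimal sketch with `np.linalg.eig`, which returns the eigenvalues and a matrix whose *columns* are the corresponding unit-length eigenvectors:

```python
import numpy as np

A = np.array([[4, 1], [2, 3]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [5. 2.]
print(eigenvectors)  # columns ≈ [0.707, 0.707] and [-0.447, 0.894]
```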

In this output, the eigenvalues are `5` and `2`, which match the
mathematical solution I calculated. The eigenvectors in `numpy`
are normalized (i.e., scaled to unit length 1 in Euclidean space),
so they may look different from the ones I calculated by hand, but they
are indeed pointing in the same directions. The first eigenvector
is approximately $[0.707, 0.707]$, which points in the same
direction as $[1,1]$, and the second eigenvector is approximately
$[−0.447, 0.894]$, which points in the same direction as
$[1,−2]$. The direction is the critical property of the
eigenvector, not the magnitude.

We can verify this by normalizing the vector, which involves dividing each component of the vector by its length. For example, suppose the vector is $[1, -2]$.

First, we calculate the magnitude $m$ (the Euclidean norm): $\small{m = \sqrt{(1)^2 + (-2)^2} = \sqrt{1+4} = \sqrt{5}}$. Then, we divide each component of the original vector by this magnitude: $\text{normalized}\ \text{vector} = \begin{bmatrix} \frac{1}{\sqrt{5}}, \frac{-2}{\sqrt{5}} \end{bmatrix}$

Or just use `numpy`.

Both approaches will give us the same normalized vector:
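A sketch of both, by hand and with `np.linalg.norm` (which defaults to the Euclidean norm):

```python
import numpy as np

v = np.array([1.0, -2.0])

# By hand: divide by the magnitude sqrt(1² + (-2)²) = sqrt(5).
manual = v / np.sqrt(np.sum(v ** 2))

# With numpy's built-in norm.
with_numpy = v / np.linalg.norm(v)

print(manual, with_numpy)  # both ≈ [0.447, -0.894]
```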

Dassit 👋

# Reading list

- Stanford Spectral Graph Theory
- CMU: Spectral Graph Theory and its Applications
- Yale Spectral Graph Theory

