
# Eigenvectors from Eigenvalues

#### Corey Chivers @cjbayesian

In a recent paper by Peter B. Denton, Stephen J. Parke, Terence Tao, and Xining Zhang, a new proof is presented demonstrating the construction of the squared norms of the eigenvector components of a Hermitian matrix using only the eigenvalues of the matrix and its submatrices.

From the abstract:

We present a new method of succinctly determining eigenvectors from eigenvalues. Specifically, we relate the norm squared of the elements of eigenvectors to the eigenvalues and the submatrix eigenvalues.

This seems like a cool result. Below is an implementation of the key identity in the paper, which confirms the result computationally.

https://arxiv.org/pdf/1908.03795.pdf

The key identity they discovered is,

Let $A$ be an $n \times n$ Hermitian matrix with eigenvalues $\lambda_i(A)$ and normed eigenvectors $\nu_i$. Let $M_j$ be the $(n-1) \times (n-1)$ submatrix of $A$ that results from deleting the $j^{th}$ column and the $j^{th}$ row, with eigenvalues $\lambda_k(M_j)$.

$$|\nu_{i,j}|^2 \prod_{k=1;k\ne i}^{n}(\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1}(\lambda_i(A) - \lambda_k(M_j)).$$

Rearranging, we get a formula for directly computing the squared norms of the eigenvector components using only the eigenvalues of the matrix $A$ and its submatrices $M_j$.

$$|\nu_{i,j}|^2 = \frac{\prod_{k=1}^{n-1}(\lambda_i(A) - \lambda_k(M_j))}{\prod_{k=1;k\ne i}^{n}(\lambda_i(A) - \lambda_k(A))}$$
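Before running the full experiment, the identity can be sanity-checked by hand on a small symmetric matrix (a sketch; the $2 \times 2$ matrix below is a hypothetical example chosen for illustration):

```python
import numpy as np

# Hypothetical 2x2 symmetric (hence Hermitian) matrix for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = np.linalg.eigvalsh(A)   # eigenvalues in ascending order: [1., 3.]

# Deleting row and column 0 leaves the 1x1 submatrix [[2.]], eigenvalue 2.
# Apply the identity for i = 0 (lambda_0 = 1), j = 0:
rhs = lam[0] - 2.0            # product over eigenvalues of M_0
lhs = lam[0] - lam[1]         # product over k != 0 of (lambda_0 - lambda_k)
v00_sq = rhs / lhs            # |v_{0,0}|^2
```

The eigenvector of this matrix for $\lambda = 1$ is $(1, -1)/\sqrt{2}$, so the identity should give $|\nu_{0,0}|^2 = 1/2$.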

Let's see if it works!

In :
%pylab inline

Populating the interactive namespace from numpy and matplotlib

In :
from numpy import linalg as LA


Generate a random Hermitian matrix

In :
n = 5
N = n * n
A = np.random.normal(0,1,N).reshape((n,n))
for i in range(n):
    for j in range(i,n):
        A[j,i] = A[i,j]

def is_hermitian(A):
    # For a real matrix, Hermitian reduces to symmetric: A equals its transpose
    return (A.T == A).sum() == np.prod(A.shape)

is_hermitian(A)

Out:
True
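Exact equality works here because the entries were copied directly. For matrices produced by floating-point arithmetic, or for complex Hermitian matrices, a tolerance-based check against the conjugate transpose is more robust (a sketch; `is_hermitian_tol` is a hypothetical name):

```python
import numpy as np

def is_hermitian_tol(A, tol=1e-12):
    # A Hermitian matrix equals its conjugate transpose;
    # for real matrices this reduces to symmetry
    return np.allclose(A, A.conj().T, atol=tol)
```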

We'll use the NumPy built-in for computing the eigenvalues.

In :
LA.eigvals(A)

Out:
array([-3.8997065 , -2.16114163,  0.40540416,  1.83168771,  3.35678958])
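A side note: since $A$ is real symmetric, `np.linalg.eigvalsh` could also be used here. It exploits the symmetry and returns real eigenvalues in ascending order, whereas `eigvals` makes no ordering guarantee (a sketch with a hypothetical matrix):

```python
import numpy as np

# Hypothetical 2x2 symmetric matrix for illustration
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
vals = np.linalg.eigvalsh(A)   # real eigenvalues, ascending order
```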

This implementation is optimized for readability rather than speed. We first pre-compute the eigenvalues of each submatrix $M_j$, then build up the squared norms element by element. Finally, we take the square root and compare against the eigenvectors obtained via NumPy.

In :
lambda_A = LA.eigvals(A)
lambda_M = np.ndarray((n,n-1))
for j in range(n):
    idx = [r for r in range(n) if r!=j]
    Mj = A[idx,:][:,idx]
    lambda_M[j,:] = LA.eigvals(Mj)

v_norm = np.zeros(n*n).reshape((n,n))
for i in range(n):
    for j in range(n):
        numerator = 1
        denominator = 1
        for k in range(n-1):
            numerator *= lambda_A[i] - lambda_M[j,k]
        for k in range(n):
            if (k != i):
                denominator *= lambda_A[i] - lambda_A[k]
        v_norm[i,j] = numerator/denominator

np.sqrt(v_norm).T

Out:
array([[0.08528184, 0.27970007, 0.94307936, 0.15063128, 0.0490553 ],
[0.40224246, 0.3381635 , 0.00614825, 0.79876121, 0.29289789],
[0.5378696 , 0.47276685, 0.08457036, 0.17450692, 0.6705095 ],
[0.7003725 , 0.10930429, 0.08303973, 0.48950144, 0.50102263],
[0.22605898, 0.75627865, 0.31066989, 0.26310915, 0.45956255]])
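The inner product loops can be condensed with `np.prod`. A self-contained sketch (the function name `eigvec_norms_sq` is hypothetical, and it assumes the eigenvalues of $A$ are simple, i.e. non-degenerate, so the denominator is never zero):

```python
import numpy as np

def eigvec_norms_sq(A):
    # |v_{i,j}|^2 via the eigenvalue identity;
    # rows index eigenvalues, columns index vector components
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)   # eigenvalues of A, ascending order
    out = np.empty((n, n))
    for j in range(n):
        idx = [r for r in range(n) if r != j]
        mu = np.linalg.eigvalsh(A[np.ix_(idx, idx)])   # eigenvalues of M_j
        for i in range(n):
            num = np.prod(lam[i] - mu)
            den = np.prod(np.delete(lam[i] - lam, i))
            out[i, j] = num / den
    return out
```

Because `eigvalsh` orders eigenvalues ascending, `np.sqrt(eigvec_norms_sq(A)).T` lines up directly with the columns of `np.linalg.eigh(A)[1]`.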

Compute the eigenvectors using the NumPy built-in for comparison.

In :
_, w = LA.eig(A)
np.abs(w)

Out:
array([[0.08528184, 0.27970007, 0.94307936, 0.15063128, 0.0490553 ],
[0.40224246, 0.3381635 , 0.00614825, 0.79876121, 0.29289789],
[0.5378696 , 0.47276685, 0.08457036, 0.17450692, 0.6705095 ],
[0.7003725 , 0.10930429, 0.08303973, 0.48950144, 0.50102263],
[0.22605898, 0.75627865, 0.31066989, 0.26310915, 0.45956255]])

Visual inspection suggests they are the same. To account for floating-point error, confirm that the absolute difference between them is below a small tolerance.

In :
tol = 1e-10
np.abs(np.sqrt(v_norm).T - np.abs(w)) < tol

Out:
array([[ True,  True,  True,  True,  True],
[ True,  True,  True,  True,  True],
[ True,  True,  True,  True,  True],
[ True,  True,  True,  True,  True],
[ True,  True,  True,  True,  True]])

# Neat-o!
