A few days ago I wrote about eigenvector centrality, a way of computing which nodes in a network are most important. Rather than simply looking for the most highly connected nodes, it looks for nodes that are highly connected to nodes that are highly connected. It’s the idea behind Google’s PageRank algorithm.

## Adjacency matrices

One way to capture the structure of a graph is its adjacency matrix *A*. Each element *a*_{ij} of this matrix equals 1 if there is an edge between the *i*th and *j*th nodes and 0 otherwise. If you square the matrix *A*, the (*i*, *j*) entry of the result tells you how many paths of length 2 there are between the *i*th and *j*th nodes. In general, the (*i*, *j*) entry of *A*^{n} tells you how many paths of length *n* there are between the corresponding nodes.
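As a quick check of the path-counting property, here’s a sketch in Python with NumPy. The graph (edges 0–1, 0–2, 1–2, 2–3) is a made-up example.

```python
import numpy as np

# Adjacency matrix of a small undirected graph with edges
# (0,1), (0,2), (1,2), (2,3) -- a toy example.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

A2 = A @ A
print(A2[0, 3])  # paths of length 2 from node 0 to node 3: just 0-2-3, so 1
print(A2[0, 0])  # closed paths of length 2 at node 0: its degree, 2
```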

## Power method

Calculating eigenvector centrality requires finding an eigenvector associated with the largest eigenvalue of *A*. One way to find such an eigenvector is the power method. You start with a random initial vector and repeatedly multiply it by *A*. This produces a sequence of vectors that converges to the eigenvector we’re after.

Conceptually this is the same as computing *A*^{n} first and then multiplying it by the random initial vector. So not only does the matrix *A*^{n} count paths of length *n*; for large *n* it also helps us compute eigenvector centrality.
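A minimal sketch of the power method in NumPy, using the same toy graph as above. The iteration count is illustrative; normalizing at each step keeps the entries from overflowing or underflowing.

```python
import numpy as np

# Toy undirected graph: edges (0,1), (0,2), (1,2), (2,3).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

rng = np.random.default_rng(0)
v = rng.random(4)               # random starting vector
for _ in range(100):            # repeatedly multiply by A
    v = A @ v
    v /= np.linalg.norm(v)      # rescale so the entries stay bounded

print(v)  # approximates the dominant eigenvector: the centrality scores
```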

## Theoretical details

Now for a little fine print. The power method will converge for any starting vector that has some component in the direction of the eigenvector you’re trying to find. This is almost certainly the case if you start with a vector chosen at random. The power method also requires that the matrix have a single eigenvalue of largest magnitude and that its associated eigenspace have dimension 1. The post on eigenvector centrality stated that these conditions hold, provided the network is connected.

## Computational practicalities

In principle, you could calculate eigenvector centrality by computing *A*^{n} for some large *n*. In practice, you’d never do that. For a square matrix of size *N*, a matrix-vector product takes *O*(*N*²) operations, whereas squaring *A* requires *O*(*N*³) operations. So you would repeatedly apply *A* to a vector rather than computing powers of *A*.

Also, you wouldn’t use the power method unless *A* is sparse, making it relatively efficient to multiply by *A*. If most of the entries of *A* are zeros, and there is an exploitable pattern to where the non-zero elements are located, you can multiply *A* by a vector using far fewer than *N*² operations.
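Here’s a sketch of the sparse version using SciPy; the library choice and the toy graph are illustrative. A `csr_matrix` stores only the non-zero entries, so each multiply costs on the order of the number of edges rather than *N*².

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy sparse graph: edges (0,1), (0,2), (1,2), (2,3),
# listed in both directions since the graph is undirected.
rows = [0, 1, 0, 2, 1, 2, 2, 3]
cols = [1, 0, 2, 0, 2, 1, 3, 2]
data = np.ones(len(rows))
A = csr_matrix((data, (rows, cols)), shape=(4, 4))

v = np.ones(4)                  # any positive starting vector works here
for _ in range(100):
    v = A @ v                   # sparse multiply touches only stored entries
    v /= np.linalg.norm(v)

print(v)  # same dominant eigenvector as the dense computation
```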

The eigenvectors of the Laplacian matrix derived from a graph are very interesting as well. A number of topological properties can be read off from its spectrum, such as the number of connected components, the algebraic connectivity, and where to split the graph into loosely connected pieces.
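To illustrate the first of these, here’s a sketch with a toy graph that has two components. The multiplicity of the Laplacian’s zero eigenvalue equals the number of connected components, and the second-smallest eigenvalue is the algebraic connectivity.

```python
import numpy as np

# Adjacency matrix of a graph with two components: {0,1,2} and {3,4}.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A      # graph Laplacian: degree matrix minus A
eigenvalues = np.linalg.eigvalsh(L)  # returned in ascending order

n_components = np.sum(eigenvalues < 1e-9)
print(n_components)  # 2: one zero eigenvalue per connected component
```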