Let A \in \R^{N \times N}, let P \in \R^{N \times N} be invertible, and let \set{\lambda_n \in \R}_{n=1}^N.
The diagonalization of A is written as:
\begin{align*}
A = P \Lambda P^{-1}
,\quad
\Lambda = \begin{pmatrix}
\lambda_1 & & & \\
& \lambda_2 & & \\
& & \ddots & \\
& & & \lambda_N \\
\end{pmatrix}
\end{align*}
where \lambda_1, \lambda_2, \dots, \lambda_N are the eigenvalues of A, and the n-th column vector P_n \in \R^N of P is the eigenvector corresponding to \lambda_n.
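As a quick numerical sanity check, the following minimal NumPy sketch (the matrix A here is an arbitrary example chosen for illustration, not taken from the text) obtains the eigenvalues and P with np.linalg.eig and reconstructs A as P \Lambda P^{-1}:

```python
import numpy as np

# Arbitrary diagonalizable matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
lam, P = np.linalg.eig(A)
Lam = np.diag(lam)

# A = P @ Lam @ P^{-1}, up to floating-point error.
assert np.allclose(A, P @ Lam @ np.linalg.inv(P))
```

Each column of P then satisfies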
\begin{align*}
A P_n &= \lambda_n P_n \\
\end{align*}
because, writing P_n = P e_n where e_n \in \R^N is the n-th standard basis vector,
\begin{align*}
A P_n &= P \Lambda P^{-1} P_n \\
&= P \Lambda e_n \\
&= P \lambda_n e_n \\
&= \lambda_n P e_n \\
&= \lambda_n P_n \\
\end{align*}
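The intermediate steps can also be checked numerically. In the sketch below (same arbitrary example matrix as above), P^{-1} P_n = e_n because P_n = P e_n, and each column satisfies the eigenvalue equation:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)
N = A.shape[0]

for n in range(N):
    P_n = P[:, n]          # n-th eigenvector (n-th column of P)
    e_n = np.eye(N)[:, n]  # n-th standard basis vector
    assert np.allclose(P_inv @ P_n, e_n)       # P^{-1} P_n = e_n, since P_n = P e_n
    assert np.allclose(A @ P_n, lam[n] * P_n)  # A P_n = lambda_n P_n
```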
Let's consider the product A y, where A \in \R^{N \times N} and y \in \R^N.
Given the diagonalization A = P \Lambda P^{-1}, we can compute A y in the eigenvector basis:
\begin{align*}
A y &\overset{\flat}{=} A \left( \sum_{n=1}^N w_n P_n \right) \\
&= \sum_{n=1}^N w_n A P_n \\
&= \sum_{n=1}^N w_n (\lambda_n P_n) \\
\end{align*}
The equality \overset{\flat}{=} comes from expanding y in the eigenvector basis as y = \sum_{n=1}^N w_n P_n (see: this link).
The coefficients are obtained as w = P^{-1} y, where w = (w_1, w_2, \dots, w_N)^\top.
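Here is a minimal sketch of that computation (again with an arbitrary example A): expand y into coefficients w = P^{-1} y, scale each coefficient by its eigenvalue, and recombine; the result matches the direct product A y:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, P = np.linalg.eig(A)

y = np.array([1.0, -3.0])  # an arbitrary vector
w = np.linalg.inv(P) @ y   # coefficients of y in the eigenvector basis

# A y = sum_n w_n lambda_n P_n
Ay = sum(w[n] * lam[n] * P[:, n] for n in range(len(lam)))
assert np.allclose(A @ y, Ay)
```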