Fig. 2: The choice of the eigenvectors. | Nature Communications


From: Machine learning in spectral domain


a The structure of the matrix Φk is schematically depicted. The diagonal entries of Φk are set to unity. The sub-diagonal block of size \({N}_{k+1}\times {N}_{k}\), for k = 1, …, ℓ − 1, is filled with uniform random numbers in [a, b], with \(a,b\in {\Bbb{R}}\). These blocks yield an effective indentation between successive stacks of linearly independent eigenvectors. The diagonal matrix Λk of the eigenvalues is also represented. The sub-portions of Φk and Λk that are modified by the training performed in the spectral domain are highlighted (see legend). In the experiments reported in this paper, the initial eigenvector entries are uniform random variables distributed in [−0.5, 0.5], and the eigenvalues are uniform random numbers distributed in the interval [−0.01, 0.01]. Optimising the ranges from which these initial guesses are drawn (for both eigenvalues and eigenvectors) is an open problem that we have not tackled. b A \(({N}_{1}+{N}_{\ell })\times ({N}_{1}+{N}_{\ell })\) matrix \({{\mathcal{A}}}_{c}\) can be obtained from \({\mathcal{A}}=\left({{{\Pi }}}_{k = 1}^{\ell -1}{{\bf{A}}}_{k}\right)\), which provides the weights for a single-layer perceptron that maps the input to the output in direct space.
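The construction described in panel a can be sketched in NumPy. This is a minimal illustration, not the authors' code: the layer widths, function names, and the assembly of Ak as \({{{\Phi }}}_{k}{{{\Lambda }}}_{k}{{{\Phi }}}_{k}^{-1}\) follow the caption's description (unit diagonal, one random sub-diagonal block per Φk, eigenvalues drawn uniformly in [−0.01, 0.01]), but the specific sizes are hypothetical.

```python
import numpy as np

# Illustrative layer widths N_1, N_2, N_3; the paper's actual sizes differ.
N_sizes = [4, 3, 2]
N_tot = sum(N_sizes)  # all stacks are embedded in an N_tot-dimensional space

rng = np.random.default_rng(0)

def build_phi(k, a=-0.5, b=0.5):
    """Phi_k: identity matrix with a single random sub-diagonal block of
    size N_{k+1} x N_k, entries uniform in [a, b] (here [-0.5, 0.5])."""
    phi = np.eye(N_tot)
    off = sum(N_sizes[:k])          # column offset of the k-th stack
    r0, c0 = off + N_sizes[k], off  # the block sits just below stack k
    phi[r0:r0 + N_sizes[k + 1], c0:c0 + N_sizes[k]] = rng.uniform(
        a, b, size=(N_sizes[k + 1], N_sizes[k]))
    return phi

def build_A(k):
    """A_k = Phi_k Lambda_k Phi_k^{-1}, with eigenvalues drawn
    uniformly in [-0.01, 0.01] as stated in the caption."""
    lam = np.diag(rng.uniform(-0.01, 0.01, size=N_tot))
    phi = build_phi(k)
    return phi @ lam @ np.linalg.inv(phi), lam

# By construction, the eigenvalues of A_k are the diagonal of Lambda_k,
# so training the entries of Lambda_k acts directly on the spectrum.
A0, lam0 = build_A(0)
```

The collapsed matrix 𝒜 of panel b would then be the product of the individual Ak (here, `build_A(0)[0] @ build_A(1)[0]` for this three-layer sketch).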
