Compute generalised eigenvectors

Hi,
I wonder whether it is also possible to use TensorFlow to compute generalised eigenvectors of the following (discretised Laplace-Beltrami) problem:

K v = λ M v,
where K and M are two symmetric matrices.

The tf.linalg.eig function seems to compute the eigenvectors only for M = Id.
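For reference, this is roughly the call I am after; scipy.linalg.eigh already accepts a second matrix for the generalised problem (just an illustrative sketch, not my actual matrices):

import numpy as np
from scipy.linalg import eigh

# small illustrative K (stiffness-like, symmetric) and M (mass-like, SPD)
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
K = X + X.T
M = X @ X.T + 5.0 * np.eye(5)

# SciPy solves K v = l M v directly via the second argument;
# I am looking for the TensorFlow equivalent of this call
eigvals, eigvecs = eigh(K, M)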

Thanks,

Cesare

maybe @markdaoust can help here

You need to write it by hand. Here is my code for it (see Numerical Recipes, section 11.0.5). If you want to use it, keep A and B Hermitian and test it yourself:

import tensorflow as tf

def generalized_eigen(A, B, eigvals_high_to_low=True):
    # Solve A x = lambda B x for Hermitian A and Hermitian positive definite B
    # (see Numerical Recipes, section 11.0.5).
    # L is the lower-triangular Cholesky factor of B, i.e. B = L L^H.
    L = tf.linalg.cholesky(B)
    # Solve Y L^H = A by a workaround (solve L Y^H = A^H and take the adjoint);
    # if https://github.com/tensorflow/tensorflow/issues/55371 is resolved this can be simplified.
    Y = tf.linalg.adjoint(tf.linalg.triangular_solve(L, tf.linalg.adjoint(A)))
    # Solve L C = Y, so that C = L^{-1} A L^{-H}.
    C = tf.linalg.triangular_solve(L, Y)
    # Solve the equivalent standard Hermitian eigenvalue problem C y = lambda y.
    e, v = tf.linalg.eigh(C)
    if eigvals_high_to_low:
        # tf.linalg.eigh returns eigenvalues in non-decreasing order;
        # reverse to get non-increasing order.
        e = tf.reverse(e, axis=[-1])
        v = tf.reverse(v, axis=[-1])
    # Solve L^H x = y, where x are the eigenvectors of the original problem.
    v = tf.linalg.triangular_solve(tf.linalg.adjoint(L), v, lower=False)
    # Optionally normalize the eigenvectors:
    # v = tf.math.l2_normalize(v, axis=0)
    return e, v
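For example, a quick sanity check of the routine on random symmetric matrices could look like this (a sketch; the matrices here are only illustrative):

import tensorflow as tf

n = 6
X = tf.random.normal((n, n), dtype=tf.float64)
A = X + tf.transpose(X)                                                   # symmetric
Y = tf.random.normal((n, n), dtype=tf.float64)
B = tf.matmul(Y, Y, transpose_b=True) + n * tf.eye(n, dtype=tf.float64)   # positive definite

e, v = generalized_eigen(A, B)

# the columns of v should satisfy A v = e B v up to numerical error
residual = tf.matmul(A, v) - tf.matmul(B, v) * e
print(tf.reduce_max(tf.abs(residual)).numpy())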

Thanks.
Does it also work for sparse tensors? Or is there something more efficient?

I have no knowledge about sparse tensors, since I don't use them in my case.
As for efficiency, it depends on what you need. Do you find it inefficient?
In the actual loss calculation I don't use the routine above, since I only need the sum of the generalized eigenvalues; instead, I compute the trace of pseudoinverse(B) * A (you can refer to the proof in "Proof that the trace of a matrix is the sum of its eigenvalues" on Mathematics Stack Exchange). Do you need exactly both eigenvectors and eigenvalues in your optimization?
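As a sketch of that shortcut (with placeholder A and B of the same kind as in the check above; illustrative only):

import tensorflow as tf

n = 6
X = tf.random.normal((n, n), dtype=tf.float64)
A = X + tf.transpose(X)                                                   # Hermitian (real symmetric here)
Y = tf.random.normal((n, n), dtype=tf.float64)
B = tf.matmul(Y, Y, transpose_b=True) + n * tf.eye(n, dtype=tf.float64)   # positive definite

# the sum of the generalized eigenvalues of A x = lambda B x equals trace(B^{-1} A),
# written here with the pseudoinverse as in the text above
eig_sum = tf.linalg.trace(tf.matmul(tf.linalg.pinv(B), A))
# eig_sum should match tf.reduce_sum(generalized_eigen(A, B)[0])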

Yes, I need both eigenvectors and eigenvalues (not all of them, but a subset; say, the first 100).
I'm using sparse tensors because my matrices are sparse and possibly very large.
The function works if I first convert the tensors to dense (tf.sparse.to_dense(A), tf.sparse.to_dense(B)); however, for large matrices this can fill the GPU memory quite quickly.
I wonder if TensorFlow has something that works like scipy.sparse.linalg.eigs:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigs.html
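For context, this is the kind of thing I can do with SciPy today (a sketch using the symmetric variant eigsh, since K and M are symmetric; the random matrices are just placeholders):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 2000
# placeholder sparse symmetric positive definite K and M (stand-ins for the
# discretised Laplace-Beltrami stiffness and mass matrices)
R = sp.random(n, n, density=1e-3, format="csr", random_state=0)
K = sp.diags(np.arange(1.0, n + 1.0)) + 0.1 * (R + R.T)
M = 4.0 * sp.identity(n, format="csr") + 0.1 * (R + R.T)

# first 100 eigenpairs of K v = l M v, nearest to sigma=0 (i.e. the smallest ones)
vals, vecs = eigsh(K, k=100, M=M, sigma=0)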

Unfortunately this is not implemented yet:

On the other hand, I have checked the PyTorch documentation, and torch.lobpcg should be exactly what you want, since you mentioned the "first N eigenvalues and eigenvectors", but it does not yet support backward propagation for sparse tensors (see the warning at the following link):
https://pytorch.org/docs/stable/generated/torch.lobpcg.html
So what you need may not have a good solution yet.
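For completeness, the forward computation with torch.lobpcg would look roughly like this (a sketch with dense placeholder matrices; lobpcg expects a symmetric positive definite pencil):

import torch

n = 500
# placeholder symmetric positive definite A and B (they may also be sparse,
# but then gradients are not supported yet, as noted above)
R = torch.randn(n, n, dtype=torch.float64)
A = R @ R.T + n * torch.eye(n, dtype=torch.float64)
S = torch.randn(n, n, dtype=torch.float64)
B = S @ S.T + n * torch.eye(n, dtype=torch.float64)

# k eigenpairs of A x = lambda B x; largest=False returns the smallest ones
e, v = torch.lobpcg(A, k=100, B=B, largest=False)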

Thanks.
Concerning linear algebra (the feature request you posted), I also tried to implement the conjugate gradient method for sparse tensors to invert matrices: it works, but it is very slow compared with SciPy, and I have no idea why.
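For reference, what I tried is roughly along these lines (a minimal eager-mode sketch, not my exact code):

import tensorflow as tf

def sparse_cg(A_sp, b, max_iter=1000, tol=1e-8):
    # Conjugate gradient for A x = b, where A_sp is a symmetric positive
    # definite tf.sparse.SparseTensor and b is a dense [n, 1] tensor.
    x = tf.zeros_like(b)
    r = b - tf.sparse.sparse_dense_matmul(A_sp, x)
    p = r
    rs_old = tf.reduce_sum(r * r)
    for _ in range(max_iter):
        Ap = tf.sparse.sparse_dense_matmul(A_sp, p)
        alpha = rs_old / tf.reduce_sum(p * Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = tf.reduce_sum(r * r)
        if tf.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x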