No gradients provided for any variable though get_grads is defined

I am trying to implement a paper I read, in which the loss has a probabilistic dependence on a hidden variable of the network. Therefore I need access to hidden variables, so I use the low-level API for the loss. Here is my model:

class probablistic_model(tf.keras.Model):
    def call(self, inputs):
        return self.auto_encoder(inputs), self.z

    # get gradients
    def get_grad(self, X, Y):
        with tf.GradientTape() as tape:
            L = self.get_loss(X, Y)
        # gradients of the loss with respect to the trainable weights
        # might be incorrect
        return tape.gradient(L, self.trainable_variables)

    def get_loss(self, X, Y):
        z = self.z
        X = X[0]
        diff = X - Y
        diff = tf.norm(diff, ord=2, axis=1)
        diff *= 2
        diff *= z
        score = diff - self.λ * tf.norm(diff, ord=1)
        return score

    def __init__(self, dimension, sigma):
        super().__init__()
        self.z = probablistic_feature(dimension, sigma)
        self.auto_encoder = keras_auto_encoder(dimension[0], 30)
        self.λ = 2e-1

but when I try to run the model:

import sys

import numpy as np
import tensorflow as tf

tf.config.run_functions_eagerly(True)

dataset = np.loadtxt(sys.argv[1], delimiter=",")[1:-1, :]
model = probablistic_model(dataset.shape[::-1], 1)
model.compile()
model.fit(x=dataset, y=dataset)

I get: ValueError: No gradients provided for any variable: ['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0'].

Even though get_grad is defined (if not correctly), why does TensorFlow ignore it?

Because get_grad is not a standard Keras API, nothing inside model.fit() ever calls it. Keras only uses the hooks it defines, such as a loss passed to compile() or an overridden train_step; a method with a custom name is simply never invoked during training.

If you want to implement the training step yourself, see this doc:
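In that approach the gradient computation moves into an overridden train_step, which fit() does call on every batch. A minimal sketch, assuming TF 2.x; the layer size and the loss here are placeholders for illustration, not the model from the question:

```python
import numpy as np
import tensorflow as tf


class CustomModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inputs):
        return self.dense(inputs)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            # the forward pass must run inside the tape so the
            # trainable weights are watched
            y_pred = self(x, training=True)
            # any custom loss built from watched tensors works here
            loss = tf.reduce_mean(tf.norm(y_pred - y, ord=2, axis=1))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}


model = CustomModel()
model.compile(optimizer="adam")  # no loss argument: train_step supplies it
history = model.fit(
    np.random.rand(8, 4).astype("float32"),
    np.random.rand(8, 4).astype("float32"),
    epochs=1,
    verbose=0,
)
```

Because train_step builds the loss inside its own GradientTape and applies the gradients itself, fit() no longer needs a compiled loss, which is exactly the situation in the question.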