Getting gradient as a sequential model

Hi,

Assume model is a fully connected, 8-layer Keras sequential model that I have pre-trained and that takes a two-dimensional tensor, [t, x], as input.

I am interested in obtaining the partial derivative of this model with respect to the input component x, itself as a sequential model (one that depends on the original model and the graph corresponding to its gradient).

With the following code, I am able to visualize the graph of this model and its partial derivative in TensorBoard:

def compute_derivative(model, inp):
    with tf.GradientTape(persistent=True) as tape:
        # Split the input into its two components and watch them so the
        # tape records the operations involving each of them.
        t, x = inp[:, 0:1], inp[:, 1:2]
        tape.watch(t)
        tape.watch(x)

        # Rebuild the model input from the watched slices so that u stays
        # connected to t and x on the tape.
        u = model(tf.stack([t[:, 0], x[:, 0]], axis=1))
        u_x = tape.gradient(u, x)

    return u_x

@tf.function
def traceme(x):
    return compute_derivative(model, x)

logdir = "/tmp/logs"
writer = tf.summary.create_file_writer(logdir)
tf.summary.trace_on(graph=True, profiler=True)

traceme(tf.zeros((1, 2)))
with writer.as_default():
    tf.summary.trace_export(name="trace", step=0, profiler_outdir=logdir)

And the output looks like this:

So I can see that theoretically this should be accessible somehow.

But given u_x in the code, how do I work backwards to the sequential model that generated that partial derivative? I feel like this requires some graph magic I am not aware of.

Any pointers would be super helpful. Thank you so much!

Hi @fgirbal, you can calculate the gradients of a sequential model using tf.GradientTape. For example:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten

model = tf.keras.Sequential([Flatten(input_shape=(2, 2)),
                             Dense(28, activation='sigmoid'),
                             Dense(1, activation='sigmoid')])

inp = np.random.rand(1, 2, 2)
inp_tf = tf.convert_to_tensor(inp, np.float32)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(inp_tf)
    f = model(inp_tf)
grad_f = tape.gradient(f, inp_tf)   # same shape as the input: (1, 2, 2)
df_dx = grad_f[0, 0]                # partials w.r.t. the first row of the input
df_dt = grad_f[0, 1]                # partials w.r.t. the second row of the input
j = tape.jacobian(f, inp_tf)        # full Jacobian of the model output w.r.t. the input
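
For the two-column [t, x] input in your original post, the same tape-based approach gives the partial derivative with respect to x directly by slicing the gradient. Here is a minimal sketch; the small model below is only a hypothetical stand-in for your pre-trained 8-layer one:

import tensorflow as tf

# Hypothetical stand-in for the pre-trained 8-layer model from the question.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='tanh', input_shape=(2,)),
    tf.keras.layers.Dense(1)
])

inp = tf.random.uniform((4, 2))      # each row is [t, x]
with tf.GradientTape() as tape:
    tape.watch(inp)
    u = model(inp)
grad = tape.gradient(u, inp)         # shape (4, 2): columns are du/dt and du/dx
u_t, u_x = grad[:, 0:1], grad[:, 1:2]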

Thank You.