# Error in Hessian calculation using forward over backward propagation

**System information**

• Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
• TensorFlow installed from (source or binary): binary (pre-installed in Colab); reproduced in both CPU and GPU runtimes
• TensorFlow version (use command below): 2.7.0
• Python version: Python 3.7.12
• GPU model and memory: Tesla P100

**Describe the current behavior**
I am trying to compute a second derivative following the method in the TensorFlow autodiff tutorial, using forward-over-backward propagation to calculate a Hessian-vector product. For memory reasons, we chose not to use the double-backward method. Our code contains nested loops and `tf.dynamic_partition`. Both the gradient and the Hessian work in eager mode, but as soon as I decorate the function with `@tf.function` an error appears. I narrowed it down to the combination of `tf.dynamic_partition` and a `for` loop over `tf.range`.
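For reference, the forward-over-backward pattern we follow nests a backward `tf.GradientTape` inside a forward-mode `tf.autodiff.ForwardAccumulator`. A minimal sketch with an illustrative cubic loss (the loss, `x`, and `v` here are dummies, not our real model):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
v = tf.constant([1.0, 0.0, 0.0])  # direction vector for the HVP

# forward-over-backward: the forward accumulator wraps the backward tape
with tf.autodiff.ForwardAccumulator(x, v) as acc:
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = tf.reduce_sum(x ** 3)   # dummy scalar loss
    grad = tape.gradient(y, x)      # backward pass: 3 * x**2
hvp = acc.jvp(grad)                 # H @ v, with H = diag(6 * x)
```

With this loss the Hessian is `diag(6, 12, 18)`, so the product with `v = [1, 0, 0]` is `[6, 0, 0]`.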

Additionally, the error differs depending on whether the `hvp` function is decorated: without the decorator it raises a `TypeError`, and with the decorator it raises a `SystemError`.

• Briefly describe your candidate solution (if contributing): I have tried to work around the problem by replacing `tf.dynamic_partition` with `tf.gather` or with an explicit loop, but each variant fails with a different error.
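The `tf.gather` variant we tried looks roughly like this (a sketch; building the index vector with `tf.where` is one way to express it):

```python
import tensorflow as tf

mu = tf.constant([[3., 2., 1.], [3., 2., 1.], [3., 2., 1.]])
partitions = tf.constant([1, 0, 0])

# tf.dynamic_partition(...)[0] keeps the rows whose partition id is 0;
# the same rows can be selected with tf.where + tf.gather
idx = tf.where(tf.equal(partitions, 0))[:, 0]
points_gather = tf.gather(mu, idx)

points_dp = tf.dynamic_partition(mu, partitions, 2)[0]
# both select rows 1 and 2 of mu
```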

**Standalone code to reproduce the issue**
I have made a reduced dummy version of the code here and also in Colab

``````python
import tensorflow as tf
import numpy as np

@tf.function  # without the decorator, the function works fine in eager mode
def foo(mu):
    partitions = tf.constant([1, 0, 0])
    points = tf.dynamic_partition(mu, partitions, 2)[0]
    block = points
    # a dummy example of a loop
    for j in tf.range(1):  # without this loop, the function works fine
        block = points
    return block

# dummy input data
mu = tf.constant([[3., 2., 1.], [3., 2., 1.], [3., 2., 1.]])
foo(mu)

@tf.function
def loss(mu):
    with tf.GradientTape() as t:
        t.watch(mu)
        property = foo(mu)
        out = tf.reduce_sum(property)  # dummy scalar objective
    loss = t.gradient(out, mu)  # backward pass
    return loss

# hessian vector product
@tf.function  # with/without the decorator, a different error appears
def hvp(mu, tangents):
    with tf.autodiff.ForwardAccumulator(mu, tangents) as acc:
        with tf.GradientTape() as t:
            t.watch(mu)
            property = foo(mu)
            out = tf.reduce_sum(property)  # dummy scalar objective
        loss = t.gradient(out, mu)  # backward pass
        print('tracing')
        tf.print('executing')
    hess = acc.jvp(loss)
    return hess

tangents = np.zeros(mu.shape)
tangents[0] = 1
tangents = tf.convert_to_tensor(tangents, dtype=tf.float32)
hess = hvp(mu, tangents)
``````
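For completeness, unrolling the loop at trace time with a Python `range` (instead of `tf.range`, which AutoGraph converts to a `tf.while_loop`) avoids the failing combination in this reduced example. This is a sketch of that workaround, not a fix for the underlying bug:

```python
import tensorflow as tf

@tf.function
def foo_unrolled(mu):
    partitions = tf.constant([1, 0, 0])
    points = tf.dynamic_partition(mu, partitions, 2)[0]
    block = points
    for j in range(1):  # Python range: unrolled during tracing, no tf.while_loop
        block = points
    return block

mu = tf.constant([[3., 2., 1.], [3., 2., 1.], [3., 2., 1.]])
block = foo_unrolled(mu)  # runs under @tf.function without error
```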

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

``````
---------------------------------------------------------------------------
StagingError                              Traceback (most recent call last)
<ipython-input-6-f99c3f212345> in <module>()
14 tangents[0]=1
15 tangents = tf.convert_to_tensor(tangents,dtype=tf.float32)
---> 16 hess = hvp(mu,tangents)
17
18

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1127           except Exception as e:  # pylint:disable=broad-except