ZeroDivisionError with Dormand-Prince ODE Solver Gradients

Hello,

I’m trying to implement a mechanistic model in TensorFlow for use as part of a GAN, following the approach in this paper: https://arxiv.org/abs/2009.08267

The mechanistic model uses the TensorFlow Probability Dormand-Prince solver to integrate a set of differential equations that yield pressure waveforms for different regions of the cardiovascular system. I want to get gradients of the waveforms with respect to the parameters of the mechanistic model in order to train the GAN’s generator.
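For context, the setup looks roughly like the sketch below, with a toy one-dimensional ODE standing in for the actual cardiovascular equations (theta, the tolerances, and the right-hand side are all placeholders; the relevant parts are the `DormandPrince.solve` call and the `GradientTape` around it, and this assumes a TFP version whose `solve` accepts a `constants` argument, which the traceback suggests is the case here):

```python
import tensorflow as tf
import tensorflow_probability as tfp

# Placeholder model parameter (the real model has many of these).
theta = tf.constant([0.5])

def ode_fn(t, y, theta):
    # Toy right-hand side standing in for the cardiovascular equations.
    return -theta * y

solver = tfp.math.ode.DormandPrince(atol=1e-8, rtol=1e-6)

with tf.GradientTape() as tape:
    tape.watch(theta)
    results = solver.solve(
        ode_fn,
        initial_time=0.0,
        initial_state=tf.constant([1.0]),
        solution_times=tf.linspace(0.0, 1.0, 50),
        constants={'theta': theta})
    waveform = results.states  # shape [num_times, 1]

# Gradient of the waveform w.r.t. the model parameter (via the adjoint method).
dwaveform_dtheta = tape.gradient(waveform, theta)
```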

A couple of my differential equations incorporate a time-varying variable (piecewise-defined but continuous, with no “sharp corners”) that is computed from a subset of the mechanistic model’s parameters; a sketch of how it enters the ODE function is below. If I set this variable to a constant, I can get gradients of the waveforms with respect to the model parameters. However, if I keep it time-varying, I get a ZeroDivisionError when I try to compute the gradients.
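Roughly, the time-varying variable enters the right-hand side like this (the raised-cosine pulse is just an illustrative stand-in for the real piecewise-but-smooth driver; `t_sys` and `theta` are hypothetical parameters, not the actual model’s):

```python
import numpy as np
import tensorflow as tf

def driver(t, t_sys, theta):
    # Hypothetical smooth, piecewise-defined driver computed from a subset of
    # the model parameters: a raised-cosine pulse over [0, t_sys * theta],
    # then zero; continuous with no sharp corners at the joins.
    duration = t_sys * theta
    phase = t / duration
    pulse = 0.5 * (1.0 - tf.cos(2.0 * np.pi * phase))
    return tf.where(phase <= 1.0, pulse, tf.zeros_like(pulse))

def ode_fn(t, y, t_sys, theta):
    # Toy right-hand side: the driver scales the decay rate. Replacing
    # driver(...) with a constant makes the gradient computation work;
    # keeping it time-varying is what triggers the ZeroDivisionError.
    return -driver(t, t_sys, theta) * y
```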

Any idea why this error might appear? I have included a stack trace below.

Thanks a lot for your help!

---------------------------------------------------------------------------

ZeroDivisionError Traceback (most recent call last)
in
----> 1 dy6_dX = tape.gradient(y6, X)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1078 output_gradients=output_gradients,
1079 sources_raw=flat_sources_raw,
---> 1080 unconnected_gradients=unconnected_gradients)
1081
1082 if not self._persistent:

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
75 output_gradients,
76 sources_raw,
---> 77 compat.as_str(unconnected_gradients.value))

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py in actual_grad_fn(*result_grads)
472 "@custom_gradient grad_fn.")
473 else:
---> 474 input_grads = grad_fn(*result_grads)
475 variable_grads = []
476 flat_grads = nest.flatten(input_grads)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in grad_fn(*dresults, **kwargs)
454 initial_time=result_time_array.read(initial_n),
455 initial_state=make_augmented_state(initial_n,
---> 456 terminal_augmented_state),
457 )
458

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/dormand_prince.py in _initialize_solver_internal_state(self, ode_fn, initial_time, initial_state)
307 p = self._prepare_common_params(initial_state, initial_time)
308
---> 309 initial_derivative = ode_fn(p.initial_time, p.initial_state)
310 initial_derivative = tf.nest.map_structure(tf.convert_to_tensor,
311 initial_derivative)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in augmented_ode_fn(backward_time, augmented_state)
388 adjoint_constants_ode) = tape.gradient(
389 adjoint_dot_derivatives, (state, tuple(variables), constants),
---> 390 unconnected_gradients=tf.UnconnectedGradients.ZERO)
391 return (negative_derivatives, adjoint_ode, adjoint_variables_ode,
392 adjoint_constants_ode)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1078 output_gradients=output_gradients,
1079 sources_raw=flat_sources_raw,
---> 1080 unconnected_gradients=unconnected_gradients)
1081
1082 if not self._persistent:

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
75 output_gradients,
76 sources_raw,
---> 77 compat.as_str(unconnected_gradients.value))

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices, forward_pass_name_scope)
157 gradient_name_scope += forward_pass_name_scope + "/"
158 with ops.name_scope(gradient_name_scope):
---> 159 return grad_fn(mock_op, *out_grads)
160 else:
161 return grad_fn(mock_op, *out_grads)

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradV2(op, grad)
228 def _ConcatGradV2(op, grad):
229 return _ConcatGradHelper(
---> 230 op, grad, start_value_index=0, end_value_index=-1, dim_index=-1)
231
232

~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradHelper(op, grad, start_value_index, end_value_index, dim_index)
117 # in concat implementation to be within the allowed [-rank, rank) range.
118 non_neg_concat_dim = (
---> 119 concat_dim._numpy().item(0) % input_values[0]._rank()) # pylint: disable=protected-access
120 # All inputs are guaranteed to be EagerTensors in eager mode
121 sizes = pywrap_tfe.TFE_Py_TensorShapeSlice(input_values,

ZeroDivisionError: integer division or modulo by zero

Update: I was able to resolve this issue! The parameters that were causing the error were being passed into the function as 0-dimensional (scalar) tensors rather than 1-d tensors; simply passing them in as 1-d tensors fixed it.
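Concretely, the change was just adding an explicit length-1 dimension to those parameters (the name below is a placeholder, not one of the actual model parameters):

```python
import tensorflow as tf

# Before: passed as a 0-d (scalar) tensor -- this triggered the ZeroDivisionError.
resistance = tf.constant(1.2)    # shape []

# After: the same value with an explicit length-1 dimension -- gradients work.
resistance = tf.constant([1.2])  # shape [1]
```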

I’m still not sure why such a minor difference (0-d vs. 1-d) would result in a ZeroDivisionError. Looking at the last frame, the ConcatV2 gradient computes `concat_dim._numpy().item(0) % input_values[0]._rank()`, so presumably a rank-0 tensor was reaching that op and the modulo-by-zero came from the rank of 0, but I don’t see how it gets there. If anyone has suggestions, I’d really appreciate it! Thank you so much!