Dear Community,

I am currently faced with the following task:

I have a long expression (function) that looks somewhat similar to

```
J = - (Gp*((gm15 + gs15)*((KP1*gds16*ibias^2*lambda1*m*(gds16 + gm16)*(Lbase*gs12 + 2^(1/2)*Lbase*(Gref*KP12*r12*vref)^(1/2) + Gref*lambda12*vref)*(KP3*ibias*r3)^(1/2)* [...]
```

In reality, the expression is much, much longer (on the order of tens of thousands of symbols). All variables are scalars, and only addition, subtraction, multiplication, division, and square roots occur. Most of the variables are constants; only a few of them are the inputs to this function.

The task is to numerically compute the gradient of this expression with respect to the input variables.

**Current approach**

I let a computer algebra tool symbolically compute the gradients. Then I created a function similar to this:

```
def compute_gradient(<very_long_list_of_variables_and_constants>):
    return <very_long_expression_of_the_gradient>
```

With this approach, the computation runs for hours without ever returning a result.
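For reference, a toy version of the pipeline I used, assuming SymPy as the computer algebra tool (the symbols `a`, `b`, `x` and the expression are made up; the real expression is vastly larger, and the symbolic derivative it produces is correspondingly huge):

```python
import sympy as sp

# Toy stand-in for the real expression: same operation set
# (+, -, *, /, sqrt), just tiny.
a, b, x = sp.symbols('a b x', positive=True)
expr = (a * sp.sqrt(b * x) + x**2) / (a + sp.sqrt(x))

# Symbolic gradient with respect to the input variable x.
# For the real expression this step suffers from expression swell.
grad = sp.diff(expr, x)

# Turn the symbolic gradient into a plain Python function,
# analogous to compute_gradient above.
f_grad = sp.lambdify((a, b, x), grad, modules='math')
print(f_grad(1.0, 2.0, 3.0))
```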

**Solution idea**

The idea is now to use the backpropagation mechanism (i.e. reverse-mode automatic differentiation) to compute those gradients.

a) If a computer fails to calculate the gradient in the “usual way”, is there a chance that backpropagation provides a result in reasonable time? What is the limit?
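To illustrate what I mean: reverse-mode AD evaluates the expression once and then obtains the gradient with respect to all inputs in roughly one extra backward pass, instead of evaluating one giant symbolic derivative per input. A minimal sketch with TensorFlow, using made-up inputs `x1`, `x2` and a constant `c` standing in for the long expression:

```python
import tensorflow as tf

c = tf.constant(2.0)       # one of the many constants
x1 = tf.Variable(3.0)      # the few actual inputs
x2 = tf.Variable(4.0)

# Record the forward evaluation on a tape, then backpropagate.
with tf.GradientTape() as tape:
    J = c * tf.sqrt(x1 * x2) + x1 / x2   # stands in for the long expression

dJ_dx1, dJ_dx2 = tape.gradient(J, [x1, x2])
print(dJ_dx1.numpy(), dJ_dx2.numpy())
```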

b) Is there a tool that automatically converts an expression like the one above into a computational graph that can be used in, e.g., Keras or TensorFlow?
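One route I have considered, sketched here under the assumption that the expression is available as a string SymPy can parse: `sympy.lambdify` with `modules='tensorflow'` emits a function built from TensorFlow ops, so TensorFlow's autodiff can take over (the expression string and symbol names below are made up):

```python
import sympy as sp
import tensorflow as tf

# Made-up stand-in for the real expression string J.
expr_str = "c1*sqrt(x1*x2) + x1/x2"
expr = sp.sympify(expr_str)
syms = sp.symbols('c1 x1 x2')

# Generate a function whose body uses TensorFlow operations.
f = sp.lambdify(syms, expr, modules='tensorflow')

c1 = tf.constant(2.0)
x1 = tf.Variable(3.0)
x2 = tf.Variable(4.0)
with tf.GradientTape() as tape:
    J = f(c1, x1, x2)
grads = tape.gradient(J, [x1, x2])
```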

c) Are there any suggestions for optimization/parallelization? I assume that a long expression like this will result in a very deep computational graph that cannot necessarily benefit from parallel computation on a GPU. Is there anything I can do to improve performance?
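One optimization I am aware of (and would welcome opinions on): common subexpression elimination. Terms like `(KP3*ibias*r3)^(1/2)` repeat many times in my expression, and SymPy's `cse` factors such repeats out so each is computed only once. A toy sketch with made-up symbols:

```python
import sympy as sp

a, b, x = sp.symbols('a b x', positive=True)

# Toy expression in which sqrt(a*x) appears three times.
expr = sp.sqrt(a*x) * (b + sp.sqrt(a*x)) + sp.sqrt(a*x)/b

# cse returns (replacements, reduced): each repeated sub-term
# becomes an intermediate computed once.
replacements, reduced = sp.cse(expr)
for lhs, rhs in replacements:
    print(lhs, '=', rhs)   # e.g. the shared sqrt(a*x) term
print(reduced[0])
```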

Thank you for any suggestions.