TensorFlow GradientTape unable to find gradient (returns None)

I’m trying to use TensorFlow’s automatic differentiation (tf.GradientTape) to find the Hessian of a cost function for a problem I want to solve, but TensorFlow doesn’t seem to be able to calculate even the first-order gradient, and I’m not sure why.
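For context, the Hessian step I’m ultimately after is the nested-tape pattern, roughly like this sketch (f here is a hypothetical stand-in cost, not my real one):

import tensorflow as tf

def f(x):
    # hypothetical stand-in cost, just for illustration
    return tf.reduce_sum(x ** 2)

x = tf.constant([1.0, 2.0])
with tf.GradientTape() as t2:
    t2.watch(x)
    with tf.GradientTape() as t1:
        t1.watch(x)
        y = f(x)
    g = t1.gradient(y, x)      # first-order gradient
hess = t2.jacobian(g, x)       # Hessian, shape (2, 2)

But I can’t even get the first-order gradient of my real cost to work. My code looks like this: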

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

from AFutilsPhaseMag import getParams

# get shared sim parameters (these don't change w/in a run in this test)
[f0, rho, tht, x, y, fi] = getParams()

# convert to tf constants for use in calculations
a = tf.constant(np.array([1., 1.]), name='a', dtype=tf.complex128)
alpha = tf.constant(np.array([0., 0.]), name='alpha', dtype=tf.complex128)
x = tf.constant(x, name='x', dtype=tf.complex128)
y = tf.constant(y, name='y', dtype=tf.complex128)
rho = tf.constant(rho, name='rho', dtype=tf.complex128)
tht = tf.constant(tht, name='tht', dtype=tf.complex128)

# record everything on a gradient tape
with tf.GradientTape() as tape:
    # watch the optimized agent params
    tape.watch(a)
    tape.watch(alpha)
    
    # get num Tx and Rx
    Na = x.numpy().size
    Ns = rho.numpy().size
    
    # make placeholder gains and storage for the received AF
    # TODO: real gain calculation - this is just to get the gradient calculation working
    k = tf.ones((Na, 1), dtype=tf.complex128)
    AF = tf.Variable(tf.zeros((Ns, 1), dtype=tf.complex128))
    
    # calculate the received AF (for each Rx, find and sum the contribution from each Tx)
    for rec in range(Ns):
        nextAF = tf.Variable(np.array(0), dtype=tf.complex128)
        for agent in range(Na):
            nextAF = nextAF + tf.exp(1j*alpha[agent] + k[agent]*x[agent]*tf.math.cos(tht[rec]) + k[agent]*y[agent]*tf.math.sin(tht[rec]))
        AF[rec].assign(nextAF)
            
    # convert to dB (dB needs log10, and tf.math.log is the natural log)
    AF = 20 * tf.math.log(AF) / tf.math.log(tf.constant(10., dtype=tf.complex128))
    
    # find total error
    err = tf.Variable(tf.reduce_sum(tf.abs(fi - AF)), name='err')
    
# get gradient of error w.r.t. the optimized params
gradVars = {'a': a, 'alpha': alpha}
grad = tape.gradient(err, gradVars)
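# inspect the result (this is where I see the Nones shown below)
print(grad)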

Here getParams() is a helper function that just returns a handful of NumPy arrays. When I run this script, I find that

grad = {'a': None, 'alpha': None}
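Both entries are None, even though a and alpha are explicitly watched. I see the same behavior from a much smaller example that follows the same pattern as my script (assigning intermediate results into a tf.Variable and wrapping the final result in another tf.Variable), so I don’t think it’s specific to my cost function:

import tensorflow as tf

x = tf.constant([1.0, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    # accumulate results by assigning into a variable, like AF above
    out = tf.Variable(tf.zeros(2))
    for i in range(2):
        out[i].assign(3.0 * x[i])
    # wrap the scalar result in a variable, like err above
    loss = tf.Variable(tf.reduce_sum(out))

print(tape.gradient(loss, x))  # also prints None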

I’ve read through the tutorials on automatic differentiation and on how tf.Variable works several times, but I can’t figure out what I’m doing wrong. Can anyone spot my mistake?
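For completeness, a stand-in getParams() with made-up values (same shapes and dtypes as my real data, which come from a sim config) reproduces the same result:

import numpy as np

def getParams():
    # made-up stand-in values, same shapes/dtypes as the real sim parameters
    f0 = 1.0e9                           # carrier frequency (unused above)
    rho = np.array([10., 12., 15.])      # Rx ranges (one per Rx)
    tht = np.array([0.1, 0.5, 1.2])      # Rx angles (one per Rx)
    x = np.array([0., 0.3])              # Tx x-positions (one per Tx)
    y = np.array([0., 0.4])              # Tx y-positions (one per Tx)
    fi = np.zeros((3, 1))                # desired AF at each Rx
    return [f0, rho, tht, x, y, fi]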