Nones returned in tape.gradient in Conv-TasNet Implementation

Hi all,

I am attempting to implement Conv-TasNet for a course project. My train_step code is at:

I’ve read the docs on the possible causes of tape.gradient returning all Nones. I’ve made sure to remove any NumPy code from my custom loss function, and I’m not sure what else could be causing the issue.

I don’t think the loss function is the cause: I ran a debugging session with a built-in loss function, and the same result (all Nones) occurs.
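For anyone hitting the same symptom: one frequent cause of all-None gradients is a value leaving the TensorFlow graph inside the tape context, e.g. via `.numpy()`. Here is a minimal, self-contained sketch (the one-layer model is purely illustrative, not Conv-TasNet) contrasting a broken loss with a working one:

```python
import tensorflow as tf

# Hypothetical stand-in model: a single Dense layer instead of Conv-TasNet.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x = tf.random.normal((4, 3))
y = tf.random.normal((4, 1))

with tf.GradientTape() as tape:
    pred = model(x, training=True)
    # BAD: .numpy() leaves the graph, so the tape cannot trace through it.
    loss_broken = tf.reduce_mean(tf.square(tf.constant(pred.numpy()) - y))
grads_broken = tape.gradient(loss_broken, model.trainable_variables)
print(grads_broken)  # [None, None] -- the gradient path was severed

with tf.GradientTape() as tape:
    pred = model(x, training=True)
    # GOOD: stay in TF ops end to end.
    loss_ok = tf.reduce_mean(tf.square(pred - y))
grads_ok = tape.gradient(loss_ok, model.trainable_variables)
print(all(g is not None for g in grads_ok))  # True
```

The same breakage happens with `float(...)`, `int(...)`, or any NumPy/SciPy call on an intermediate tensor inside the tape.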

The trainable_variables are tracked by the tape, so the gradient of the loss should be taken with respect to a list of Variables as intended. This should work, at least according to my knowledge of TensorFlow and Keras, which only spans about six weeks at the time of this posting.
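One way to verify that claim directly is to ask the tape which variables it actually watched; any trainable variable missing from that set was never touched inside the tape context. A small diagnostic sketch (again with a hypothetical stand-in model):

```python
import tensorflow as tf

# Hypothetical stand-in model for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
x = tf.random.normal((1, 3))

with tf.GradientTape() as tape:
    out = model(x, training=True)
    loss = tf.reduce_mean(out)

# Every trainable variable should appear here. Anything listed in
# `missing` was never used inside the tape context (e.g. created
# outside the traced forward pass, or reached only via a non-TF op).
watched = {v.ref() for v in tape.watched_variables()}
missing = [v.name for v in model.trainable_variables if v.ref() not in watched]
print(missing)  # [] -- all variables were traced
```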

I used the MelGAN example as a guide for implementing a similar model. One main difference I can see between their code and mine is that I am not using (what appears to be) functional API concepts in my ConvBlock. Could that be the issue?
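Subclassing instead of the functional API is fine by itself; a more common pitfall is creating sub-layers inside `call()` rather than in `__init__()`/`build()`, which gives fresh untracked weights on every invocation. A hedged sketch of a subclassed block whose variables are properly tracked (layer choices here are illustrative, not taken from the original repo):

```python
import tensorflow as tf

class ConvBlock(tf.keras.layers.Layer):
    """Hypothetical 1-D conv block. Sub-layers are created in __init__
    so Keras tracks their variables; creating them inside call() would
    instantiate new, untrained weights on every forward pass."""

    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.conv = tf.keras.layers.Conv1D(filters, kernel_size, padding="same")
        self.norm = tf.keras.layers.LayerNormalization()
        self.act = tf.keras.layers.PReLU(shared_axes=[1])

    def call(self, inputs):
        return self.act(self.norm(self.conv(inputs)))

block = ConvBlock(filters=8, kernel_size=3)
x = tf.random.normal((2, 100, 4))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(block(x)))
grads = tape.gradient(loss, block.trainable_variables)
print(all(g is not None for g in grads))  # True
```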

Regarding system/env:

Mac M1 Max
Python 3.9.7
TensorFlow/Keras 2.7

Conda environment is managed with miniforge.

Hi @beepbopboop

Welcome to the TensorFlow Forum!

The code link mentioned in this issue is not accessible. Please provide minimal reproducible code so we can replicate the error and help fix it. Thank you.

Hi Renu! Apologies. This was so long ago and I have deleted the repo.

Tangent: how do I update my username?

You can change it under your account summary > Preferences > edit option.