How to quantize a custom model (not of tf.keras.Model type)?

I’m trying to perform quantization-aware training on a custom model that is not of tf.keras.Model type. It defines its own forward(), loss(), and trainable_variables() methods, which the optimizer uses to apply gradients. Has anyone had experience running quantization-aware training in such a scenario?
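For context, here is roughly the shape of the model I mean, as a minimal NumPy sketch (all names are illustrative, not my actual code). It also shows the quantize-then-dequantize ("fake quant") op that QAT would presumably need inserted into the forward pass, since tfmot's quantize_model() only accepts Keras models:

```python
import numpy as np

def fake_quant(x, qmin=-6.0, qmax=6.0, num_bits=8):
    """Quantize-then-dequantize: the simulated-quantization op that
    QAT inserts so training 'sees' quantization error."""
    levels = 2 ** num_bits - 1
    scale = (qmax - qmin) / levels
    q = np.round((np.clip(x, qmin, qmax) - qmin) / scale)
    return q * scale + qmin

class CustomLinearModel:
    """Hypothetical custom model with the interface described above
    (forward / loss / trainable_variables), not a tf.keras.Model."""
    def __init__(self):
        self.w = np.random.randn(4, 2).astype(np.float32)
        self.b = np.zeros(2, dtype=np.float32)

    def forward(self, x):
        # Fake-quantize the weights on the fly during training.
        return x @ fake_quant(self.w) + self.b

    def loss(self, x, y):
        return float(np.mean((self.forward(x) - y) ** 2))

    def trainable_variables(self):
        return [self.w, self.b]
```

This is only a sketch of the structure; the question is how to get the official QAT tooling (or an equivalent manual approach) to work with a model of this shape.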

Thank you.