How to quantize a custom model (not of tf.keras.Model type)?

Hi,
I’m trying to perform quantization-aware training on a custom model that is not of tf.keras.Model type. It has its own forward(), loss(), and trainable_variables() functions for the optimizer to apply gradients with. Has anyone had experience running quantization-aware training in such a scenario?
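
For concreteness, here is a minimal sketch of the kind of model I mean (the names, shapes, and loss are just illustrative):

```python
import tensorflow as tf

# Hypothetical custom model: not a tf.keras.Model, just a plain class
# exposing forward(), loss(), and trainable_variables() for a
# hand-written training loop.
class CustomModel:
    def __init__(self):
        self.w = tf.Variable(tf.random.normal([20, 10]), name='w')
        self.b = tf.Variable(tf.zeros([10]), name='b')

    def forward(self, x):
        return tf.matmul(x, self.w) + self.b

    def loss(self, x, y):
        return tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(
                labels=y, logits=self.forward(x)))

    def trainable_variables(self):
        return [self.w, self.b]

model = CustomModel()
optimizer = tf.keras.optimizers.Adam()

# One training step: the optimizer applies gradients to the model's
# own trainable_variables() list.
x = tf.random.normal([32, 20])
y = tf.one_hot(tf.random.uniform([32], maxval=10, dtype=tf.int32), 10)
with tf.GradientTape() as tape:
    loss_value = model.loss(x, y)
grads = tape.gradient(loss_value, model.trainable_variables())
optimizer.apply_gradients(zip(grads, model.trainable_variables()))
```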

Thank you.

Hi @Felicity_Wang, to perform QAT on a custom layer you need to apply tfmot.quantization.keras.quantize_annotate_layer to the CustomLayer and pass in a QuantizeConfig. For more details, please refer to this document. Thank you.
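
For example, here is a minimal sketch of that pattern; CustomLayer and DefaultDenseQuantizeConfig are illustrative names, and the 8-bit quantizer settings are just one reasonable choice:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

# Stand-in for a custom layer; here it simply subclasses Dense.
class CustomLayer(tf.keras.layers.Dense):
    pass

# QuantizeConfig telling TF-MOT which weights and activations of the
# layer to quantize, and how.
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel,
                 tfmot.quantization.keras.quantizers.LastValueQuantizer(
                     num_bits=8, symmetric=True,
                     narrow_range=False, per_axis=False))]

    def get_activations_and_quantizers(self, layer):
        return [(layer.activation,
                 tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
                     num_bits=8, symmetric=False,
                     narrow_range=False, per_axis=False))]

    def set_quantize_weights(self, layer, quantize_weights):
        layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
        layer.activation = quantize_activations[0]

    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}

# Annotate the custom layer with its QuantizeConfig.
model = quantize_annotate_model(tf.keras.Sequential([
    quantize_annotate_layer(CustomLayer(20, input_shape=(20,)),
                            DefaultDenseQuantizeConfig()),
    tf.keras.layers.Flatten(),
]))

# quantize_apply needs the custom classes to be in scope.
with quantize_scope(
        {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
         'CustomLayer': CustomLayer}):
    quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
```

Note that this path requires the model to be expressed as Keras layers; for a fully custom model like yours, wrapping the forward() logic in custom tf.keras.layers.Layer subclasses first is the usual way to make it fit this API.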
