Using TFLite QuantizationDebugger from model_path

Hi!

I am testing the QuantizationDebugger functionality to inspect quantization errors in my models. I've successfully inspected a model by defining a converter object, as shown in the Inspecting Quantization Errors with Quantization Debugger guide. I would now like to inspect models I've already generated (.tflite files) by passing them through the quant_debug_model_path argument and comparing them against the floating-point model passed through float_model_path.
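For reference, this is roughly the converter-based setup that worked for me, following the guide. It is a minimal sketch: the toy Keras model and the random representative dataset are placeholders standing in for my real model and calibration data.

```python
import numpy as np
import tensorflow as tf

# Toy float model; stands in for my real model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Placeholder calibration data (random, for illustration only).
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

# Passing the converter (instead of model paths) works fine.
debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter,
    debug_dataset=representative_dataset,
)
debugger.run()  # collects per-layer quantization statistics
```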

Unfortunately, when I execute

debugger = tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_path='quantized.tflite',
    float_model_path='floating/'
)

it raises the error

ValueError: Please check if the quantized model is in debug mode

How can I set the debug mode of the quantized model?

I am using TensorFlow 2.14.1 in the Docker image tensorflow/tensorflow:2.14.0-gpu.

I have since seen that the floating-point model must also be passed as a .tflite file, so I changed my code to:

tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_path='quantized_model.tflite',
    float_model_path='float_model.tflite'
)

I also tested passing the raw model bytes directly (both straight from the converter output and after reading them back from the .tflite files), as in:

quant_model = open('quantized_model.tflite', 'rb').read()
float_model = open('float_model.tflite', 'rb').read()

tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_content=quant_model,
    float_model_content=float_model
)

None of these approaches worked; the same ValueError: Please check if the quantized model is in debug mode keeps being raised.

Any ideas why this is not working?