Doesn't TFLite-Micro's Dequantize operation support float16?

I built and trained a CNN model using TensorFlow, and used the TensorFlow Lite Converter to convert the trained model into a TFLite model.

import tensorflow as tf

# Load the trained Keras model.
model = tf.keras.models.load_model('./ECG_net_tf.h5')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model.
with open('saved_models/model.tflite', 'wb') as f:
  f.write(tflite_model)

I compiled and successfully ran the model in TFLite-Micro.
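(For reference, a quick way to sanity-check the converted model before deploying it to TFLite-Micro is the TFLite Python interpreter; the random input below is only a placeholder for a real ECG sample.)

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='saved_models/model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input with the model's expected shape; substitute a real ECG sample.
dummy_input = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))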

I noticed that it’s possible to perform float16 quantization when using the Converter.

import tensorflow as tf

# Load the trained Keras model.
model = tf.keras.models.load_model('./saved_models/ECG_net_tf.h5')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# float16 quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()

# Save the model.
with open('saved_models/model.tflite', 'wb') as f:
  f.write(tflite_model)

The quantized model also compiled successfully with TFLite-Micro, but it failed at runtime with the following error:

tensorflow/lite/micro/kernels/dequantize_common.cc:45 input->type == kTfLiteInt8 || input->type == kTfLiteInt16 || input->type == kTfLiteUInt8 was not true.
Node DEQUANTIZE (number 0f) failed to prepare with status 1
Segmentation fault

Based on the error message, I located the problematic area in tensorflow/lite/micro/kernels/dequantize_common.cc: the input type check for the Dequantize operation on line 43 only accepts kTfLiteInt8, kTfLiteInt16, and kTfLiteUInt8.

  TF_LITE_ENSURE(context, input->type == kTfLiteInt8 ||
                              input->type == kTfLiteInt16 ||
                              input->type == kTfLiteUInt8);

However, in a TFLite model quantized to float16, the input to the Dequantize operation is of type float16. Does this mean that TFLite-Micro does not support float16-quantized models?
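(For what it's worth, the float16 tensors can be confirmed by listing the tensor details of the quantized model with the TFLite Python interpreter; this is a minimal sketch, using the output path from the conversion script above.)

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='saved_models/model.tflite')
interpreter.allocate_tensors()

# List every tensor whose dtype is float16 -- these are the quantized weight
# tensors feeding the DEQUANTIZE nodes.
for t in interpreter.get_tensor_details():
    if t['dtype'] == np.float16:
        print(t['index'], t['name'], t['shape'])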

Hi @yi_zhu,

Please check which data types your target device supports. If it is int8, you need to convert your model to TFLite using full integer quantization. Right now, TFLite-Micro doesn't support dequantization of float16 data. float16 quantization is mainly intended for targets with higher computational power and memory.
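As a rough sketch of full integer quantization (assuming a representative_dataset generator that yields real ECG input samples; random data below is only a placeholder):

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('./saved_models/ECG_net_tf.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Calibration data: yield a few hundred real ECG samples with the model's
# input shape. Random data is only a placeholder here.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, *model.input_shape[1:]).astype(np.float32)]

converter.representative_dataset = representative_dataset
# Force int8 weights/activations and int8 model input/output for TFLite-Micro.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open('saved_models/model_int8.tflite', 'wb') as f:
    f.write(tflite_model)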

Thank You