Outdated FP16 Instructions in Blog Post

Reference: TensorFlow Model Optimization Toolkit — float16 quantization halves model size — The TensorFlow Blog

In Section “How to enable post-training float16 quantization”

In Line 4:

converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]

should be:

converter.target_spec.supported_types = [tf.float16]

due to:

AttributeError: module 'tensorflow._api.v2.lite' has no attribute 'constants'
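
For anyone hitting the same error, here is a minimal end-to-end sketch with the corrected line, assuming TensorFlow 2.x; "saved_model_dir" and "model_fp16.tflite" are hypothetical paths:

# Minimal post-training float16 quantization sketch (TensorFlow 2.x).
# "saved_model_dir" and "model_fp16.tflite" are placeholder paths.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # corrected line
tflite_fp16_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_fp16_model)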

Cheers!

Hi, @leonh

Welcome to TensorFlow Forum

Thank you for bringing this issue to our attention. As far as I understand, to specify the half-precision floating-point data type in TensorFlow 2.x onwards, please use tf.float16 instead of tf.compat.v1.lite.constants.FLOAT16 when quantizing a model to the 16-bit floating-point format during TensorFlow Lite conversion.
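
As a quick sanity check (a sketch, not official guidance), you can load the converted model with the TFLite interpreter and confirm that the weight tensors now report float16, assuming a file "model_fp16.tflite" produced by a conversion like the one above:

# Inspect tensor dtypes of a converted model. Weight tensors should
# show float16, while inputs/outputs typically remain float32
# (weights are dequantized at runtime unless a delegate supports fp16).
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_fp16.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    print(detail["name"], detail["dtype"])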

Please refer to the updated official documentation on Post-training float16 quantization. The TensorFlow Lite blog post you are referring to does appear to be outdated, so we will discuss this internally with the TensorFlow Lite team and, if possible, update it as soon as possible.

Thank you for your understanding and patience.