Outdated FP16 instructions in blog post

Reference: TensorFlow Model Optimization Toolkit — float16 quantization halves model size — The TensorFlow Blog

In the section “How to enable post-training float16 quantization”, line 4 reads:

converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]

should be:

converter.target_spec.supported_types = [tf.float16]

because on TensorFlow 2 the original line raises:

AttributeError: module 'tensorflow._api.v2.lite' has no attribute 'constants'
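For reference, here is a minimal end-to-end sketch of post-training float16 quantization using the corrected attribute. The tiny Dense model is only a hypothetical stand-in for the blog post's model, not its actual example:

```python
import tensorflow as tf

# Hypothetical stand-in model; the blog post uses a larger one.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Corrected line: tf.float16 instead of tf.lite.constants.FLOAT16
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()  # serialized TFLite flatbuffer (bytes)
```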

Cheers!