TFLite conversion: Float16 quantization

I'm trying to convert my saved_model.pb to TFLite with Float16 quantization:

import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()

However, I get an error at the last line:

ConverterError Traceback (most recent call last)
in
3 converter.optimizations = [tf.lite.Optimize.DEFAULT]
4 converter.target_spec.supported_types = [tf.float16]
----> 5 tflite_quant_model = converter.convert()
6 open("mytflite_float16_quantization.tflite", "wb").write(tflite_quant_model)

7 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in convert(model_flags_str, conversion_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
309 for error_data in _metrics_wrapper.retrieve_collected_errors():
310 converter_error.append_error(error_data)
--> 311 raise converter_error
312
313 return _run_deprecated_conversion_binary(model_flags_str,

ConverterError: :0: error: loc(callsite(fused["StridedSlice:", "map/while/strided_slice@map_while_body_12840"] at callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_17832"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_22210"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]))): 'tf.StridedSlice' op is neither a custom op nor a flex op
:0: note: loc(callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_17832"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_22210"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): called from
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(fused["StridedSlice:", "map/while/strided_slice@map_while_body_12840"] at callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_17832"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_22210"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]))): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_17832"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_22210"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): failed while converting: 'map/while_body':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: Select TensorFlow operators | TensorFlow Lite
TF Select ops: StridedSlice
Details:
tf.StridedSlice(tensor<?x?x3xf32>, tensor<4xi32>, tensor<4xi32>, tensor<4xi32>) -> (tensor<1x?x?x3xf32>) : {begin_mask = 14 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 14 : i64, new_axis_mask = 1 : i64, shrink_axis_mask = 0 : i64}

:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from

Please help me to solve this issue.
Thank you.

The conversion fails because that tf.StridedSlice variant (used inside your model's map/while loop) has no builtin TFLite kernel, which is what ERROR_NEEDS_FLEX_OPS indicates. Please enable TF Select ops before conversion so the converter can fall back to the TensorFlow kernel, as shown below:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops
]
tflite_quant_model = converter.convert()

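One note: with SELECT_TF_OPS enabled, the converted .tflite file depends on the TensorFlow Select (Flex) kernels at runtime, so on Android/iOS you will also need to include the TF Select dependency alongside the regular TFLite runtime. For a quick sanity check in Python, where the full tensorflow pip package already bundles the Flex delegate, you can write the model to disk and load it back with the Interpreter. A minimal sketch, reusing the file name from your traceback:

import tensorflow as tf

# Write the converted model to disk (file name taken from the traceback above)
with open("mytflite_float16_quantization.tflite", "wb") as f:
    f.write(tflite_quant_model)

# Load it back; the full TensorFlow pip package bundles the Flex delegate,
# so the TF Select op (StridedSlice) is resolved automatically
interpreter = tf.lite.Interpreter(model_path="mytflite_float16_quantization.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())
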
Thank you.