Convert TFLite Model Maker object detection model to OpenVINO

I was wondering how I can convert a TensorFlow Lite object detection model created with the TFLite Model Maker to OpenVINO. I don’t think this is possible after exporting the model to TensorFlow Lite, but it should work if the model is exported as a SavedModel.

Any help is greatly appreciated.

2 Likes

Hi @Gi_T

After training the model with the create() method you can do:

serving_model = model.create_serving_model()

print(f'Model\'s input shape and type: {serving_model.inputs}')
print(f'Model\'s output shape and type: {serving_model.outputs}')

and then save it in the SavedModel format as usual:

saved_model_path = './object_model_maker'
serving_model.save(saved_model_path, include_optimizer=False)
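
You can sanity-check the export before converting by inspecting the directory with saved_model_cli, which ships with the TensorFlow package (the path is the one used above):

!saved_model_cli show --dir ./object_model_maker --all

This lists the tag-sets and signatures, which confirms the directory is a valid SavedModel.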

Check a workflow here with an audio classification example (it is still an ongoing project).

2 Likes

@George_Soloupis’s workflow is the recommended one. As long as you can create a SavedModel for your OD network and ensure it’s supported in OpenVINO, you should be good to go. However, I must mention that OpenVINO’s TensorFlow 2 support is still very experimental and limited, so you might want to keep that in mind.

3 Likes

Thanks for the replies. I tried the code suggested by @George_Soloupis, but I couldn’t get the conversion to work. I’m currently getting the following error:

[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/content/object_model_maker/saved_model.pb" is incorrect TensorFlow model file. 
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message. 
 For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)

Unfortunately, it seems like I can’t embed any links, so I can’t share my current notebook with you, but after creating the saved model, I’m using the following code to convert the model:

output_dir = '/content/output'

!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model /content/object_model_maker/saved_model.pb \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
    --reverse_input_channels \
    --output_dir {output_dir}
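
Could it be that the Model Optimizer needs the whole SavedModel directory rather than the saved_model.pb file, since that file on its own isn’t a frozen graph? The Model Optimizer docs mention a --saved_model_dir option for this, so perhaps something like the following is the intended invocation (untested on my side):

!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --saved_model_dir /content/object_model_maker \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
    --reverse_input_channels \
    --output_dir {output_dir}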

Any help is greatly appreciated.

1 Like

It seems that even when someone saves the Model Maker model in SavedModel format with the code provided here:
model.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])
and then tries to reload it like:
reloaded_model = tf.saved_model.load('./object_detection_model_maker_saved_model/saved_model')
reloaded_model.summary()

it throws an error. I have also checked this with the audio classifier example.

I hope @Yuqi_Li can shed some light here.

1 Like

Yes, please just use model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL) to export as a SavedModel.
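
For completeness, a minimal sketch with the import included (the module path below is the usual one, though it may vary across tflite-model-maker versions):

# ExportFormat is typically importable from tflite_model_maker.config
from tflite_model_maker.config import ExportFormat

# By default this writes the SavedModel under <export_dir>/saved_model/
# (i.e. ./saved_model/ here, containing saved_model.pb and variables/).
model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL)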

2 Likes

I’ve now used model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL) as suggested, but when running

reloaded_model = tf.saved_model.load('/content/saved_model') 
reloaded_model.summary()

I’m getting the following error:

AttributeError                            Traceback (most recent call last)
<ipython-input-13-38f2d2297174> in <module>()
      1 reloaded_model = tf.saved_model.load('/content/saved_model')
----> 2 reloaded_model.summary()

AttributeError: '_UserObject' object has no attribute 'summary'

Also, while trying to convert the model to OpenVino:

output_dir = '/content/output'

!source /opt/intel/openvino_2021/bin/setupvars.sh && \
    python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model /content/saved_model/saved_model.pb \
    --transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
    --reverse_input_channels \
    --output_dir {output_dir}

I’m still getting the following error message:

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/content/saved_model/saved_model.pb" is incorrect TensorFlow model file. 
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message. 
 For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)

2 Likes

Tagging @Yuqi_Li regarding the previous answer.
This is also what I have noticed! summary() does not work after reloading the model and throws the same error:
AttributeError: '_UserObject' object has no attribute 'summary'

1 Like

I see. I’m not sure why that happens, but the object returned by tf.saved_model.load can still be used to run inference.
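
A minimal inference sketch (the signature key and input shape below are assumptions; check them against the printed signature first):

import tensorflow as tf

reloaded_model = tf.saved_model.load('/content/saved_model')
# 'serving_default' is the usual signature key, but verify it:
print(list(reloaded_model.signatures.keys()))
infer = reloaded_model.signatures['serving_default']
print(infer.structured_input_signature)

# Signature functions are called with keyword arguments, so look up
# the input name instead of hard-coding it.
input_name = list(infer.structured_input_signature[1].keys())[0]
# Dummy image; EfficientDet-Lite0 usually expects [1, 320, 320, 3] uint8,
# but match whatever the printed signature says.
dummy = tf.zeros([1, 320, 320, 3], dtype=tf.uint8)
outputs = infer(**{input_name: dummy})
print({k: v.shape for k, v in outputs.items()})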

I think you can track this at: