Export/build a TF2 SSD-ResNet model from ckpt.data to TFLite

Hello,

I recently trained an SSD-ResNet model for a custom classification task.
In general everything worked out: I was able to run inference using the files produced by the training process.
What I want to do now is convert the model to a TFLite model and compile it for a Coral TPU.
What does not work at the moment: I cannot find any information on how to convert my model into a TFLite model.

The training was done using the model_main_tf2.py script from the Object Detection API.
This produces the following files:

  • checkpoint
  • ckpt-XXX.data-00000-of-00001
  • ckpt-XXX.index
    (which, as far as I can tell, is everything)

I am aware that there are a lot of sites about converting models to TFLite, but none of them start from the kind of checkpoints I have (they always refer to model.ckpt files with some kind of .meta file, which I guess is a TF1 format?).

What I tried:

  1. Using the export scripts provided by the Object Detection API (export_tflite_ssd_graph.py, export_tflite_graph_tf2.py) - neither worked; both fail with “your model is not built”.

  2. Loading and building the model and then exporting it manually:

    import os
    import tensorflow as tf
    from object_detection.utils import config_util
    from object_detection.builders import model_builder

    # Build the model architecture from the pipeline config
    configs = config_util.get_configs_from_pipeline_file(PATH_TO_CFG)
    model_config = configs['model']
    detection_model = model_builder.build(model_config=model_config, is_training=False)

    # Restore the checkpoint weights
    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(os.path.join(PATH_TO_CKPT, 'ckpt-63')).expect_partial()

    # Try to build the model with a fixed input shape and convert it
    detection_model.build((614, 514))
    converter = tf.lite.TFLiteConverter.from_keras_model(detection_model)
    tflite_model = converter.convert()
    

This results in the same problem: it tells me that the input is not set and that I could solve this by “building” the model (which I thought I did…?).

So if anyone can shed some light on how this could work, or where I can find documentation on how to convert these checkpoints to a .tflite model, I'd be really grateful. Let me know if I forgot to provide any information.

Cheers.


So I was finally able to figure this out. Posting my own answer in the hope that it might help someone else:

The model obtained from the training process is apparently not a standard TensorFlow or Keras model; it first has to be exported using the script exporter_main_v2.py.
This is relatively self-explanatory, as it takes the arguments --trained_checkpoint_dir, --pipeline_config_path and --output_directory.
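
The call looks something like this (all paths are placeholders for your own setup):

python exporter_main_v2.py \
    --trained_checkpoint_dir path/to/your/checkpoint/dir \
    --pipeline_config_path path/to/your/pipeline.config \
    --output_directory path/to/your/output/dir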

When this step is done, a folder called “saved_model” is created at the output location.
The following Python script can be used to convert it into a TFLite model:

import tensorflow as tf

MODEL_PATH = "your/abs/path/to/folder/saved_model"
MODEL_SAVE_PATH = "your/target/save/location"

# Convert the exported SavedModel to TFLite
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_PATH, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

tflite_model = converter.convert()

# Write the .tflite file to disk
with tf.io.gfile.GFile(MODEL_SAVE_PATH, 'wb') as f:
    f.write(tflite_model)

Hope this helps someone.


@georg_laage It seems that with the latest TensorFlow version, 2.5.0, you do not need the two lines below:

converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

You can do the conversion with just the default settings, as in the snippet below.
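
For reference, a minimal version might look roughly like this (paths are placeholders):

import tensorflow as tf

MODEL_PATH = "your/abs/path/to/folder/saved_model"
MODEL_SAVE_PATH = "your/target/save/location"

# Default conversion settings only
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_PATH, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with tf.io.gfile.GFile(MODEL_SAVE_PATH, 'wb') as f:
    f.write(tflite_model)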

Can you confirm if you have time?


Hey @George_Soloupis,
I just tested your suggestion and it indeed leads to the same result.
Sadly, I had to learn over the last days that a successful (or rather, error-free) conversion of a model does not necessarily mean that the result can actually be used for anything.
If I use the model as follows:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# input_data: a tensor matching the model's input shape and dtype
# (zero-filled placeholder here)
input_data = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()
print("Validation finished successfully.")

then I never get to the final print. The interpreter.invoke() call just exits, seemingly at random, without any further message or error. I added a verification step before the conversion, so I can confirm that the original model works correctly.

I also had to learn that, since I want to compile the TFLite model for a Coral TPU, I need to add full integer quantization to the TFLite conversion, which completely breaks the process (see the sketch at the end of this post).
I'll probably open a new discussion for that, since this question itself is answered.
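
For context, “full integer quantization” roughly means extending the converter setup along these lines. This is only a sketch: the representative_dataset generator is a placeholder that should really yield preprocessed sample images, and the 640x640x3 input shape is an assumption.

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: random data with an assumed 1x640x640x3 float input.
    # A real generator should yield a few hundred preprocessed training images.
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("your/abs/path/to/folder/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict ops to int8 kernels and use integer input/output,
# which the Edge TPU compiler expects
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()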


@georg_laage I am really interested in both problems!
That you never get a result from the interpreter, and that you cannot do full quantization. Please tag me whenever you need help.


Hey George, thank you for your interest in my issue.
I was able to get a bit further with the help of a discussion in a GitHub issue: github.com/tensorflow/models/issues/9371
My original mistake was using the wrong script to export my checkpoints to the saved_model format: I was using exporter_main_v2.py instead of export_tflite_graph_tf2.py. Since then, the conversion and inference steps actually run (in some manner at least, as they terminate without error).
The resulting .tflite file is far too small though, at only 440 bytes. Apparently this can happen with mismatched TensorFlow versions, but even after a fresh install of TensorFlow in a virtual environment the problem remains the same. This is all still without integer quantization.
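
To make the steps concrete, the pipeline I am using now looks roughly like this (all paths are placeholders, and this is still without quantization):

# Step 1: export the checkpoint to a TFLite-compatible SavedModel
# with the Object Detection API script:
#
#   python export_tflite_graph_tf2.py \
#       --pipeline_config_path path/to/pipeline.config \
#       --trained_checkpoint_dir path/to/checkpoint/dir \
#       --output_directory path/to/tflite_export

# Step 2: convert the exported SavedModel ("saved_model" subfolder) to TFLite
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/tflite_export/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with tf.io.gfile.GFile("path/to/model.tflite", 'wb') as f:
    f.write(tflite_model)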
