How do you export a frozen inference graph in the TensorFlow 2.x Object Detection API?

I’ve been following along with some tutorials on training a custom object detection model using the TensorFlow 2.x Object Detection API.

Everything seems to work until I try to export the trained inference graph. Basically, in TensorFlow 1.x, there is a script, master/research/object_detection/export_inference_graph.py, which is used to export the trained model checkpoints to a single frozen inference graph.

In TensorFlow 2.x, this script no longer works and instead we use master/research/object_detection/exporter_main_v2.py, which outputs a SavedModel directory (along with a checkpoint and the pipeline config), but not a frozen inference graph. This is because in TF 2.x, frozen graphs have been deprecated in favor of the SavedModel format.
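For reference, this is roughly how I’m invoking the exporter; the paths are placeholders for my setup:

python exporter_main_v2.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_dir path/to/checkpoint_dir \
    --output_directory exported_model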

I want to be able to obtain, in TensorFlow 2, the same kind of frozen inference graph that TensorFlow 1 produced. I tried looking at this post, Save, Load and Inference From TensorFlow 2.x Frozen Graph - Lei Mao's Log Book, but I was encountering a “_UserObject has no attribute ‘inputs’” error.

Does anyone know how I can work around this error, or if there are any other solutions to export an object detection SavedModel into a single frozen inference graph?

Have you checked whether this hack is still working for your model?

I’ve tried both. The article I linked, Save, Load and Inference From TensorFlow 2.x Frozen Graph - Lei Mao's Log Book, describes this method. It resulted in “_UserObject has no attribute ‘inputs’” on the line

full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

My thought is that this script is meant for converting Keras models, whereas the loaded object detection model is possibly not a Keras model: tf.saved_model.load returns a generic _UserObject wrapper, which doesn’t have the Keras .inputs attribute. I’m not too sure, though.
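Here’s a minimal sketch of what I think is happening, assuming a placeholder exported_model/saved_model path:

import tensorflow as tf

# tf.saved_model.load returns a generic _UserObject wrapper rather than a
# tf.keras.Model, so Keras attributes like .inputs simply don't exist on it.
loaded = tf.saved_model.load("exported_model/saved_model")  # placeholder path
print(type(loaded))  # prints something like ...load._UserObject

# The serving signature still describes the model's inputs, though:
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)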

Ok, so I think your case is probably covered in this old (and long) ticket thread on GitHub:

Why do you want to use the frozen inference graph with TF2? What is your use case?

I previously used TensorFlow 1 with export_inference_graph.py and performed inference on the frozen graphs. Now I’m attempting to migrate those scripts to TensorFlow 2, but my inference scripts are still TensorFlow 1 for now, so I wanted to find a way to train models in TensorFlow 2 and still be able to perform inference using the TensorFlow 1 scripts. A sketch of the loading pattern those scripts depend on is below.
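For context, this is roughly the TF1-style pattern my inference scripts rely on; the file name and tensor name are the usual ones from the TF1 Object Detection API:

import tensorflow as tf

# Read the serialized frozen GraphDef from disk.
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import the frozen graph and run it with a TF1 session.
with tf.compat.v1.Session() as sess:
    tf.compat.v1.import_graph_def(graph_def, name="")
    # ...then sess.run on tensors fetched by name, e.g. "detection_boxes:0"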

Can you run inference from the SavedModel, like in:
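Something along these lines, I mean; a minimal sketch assuming the model was exported with exporter_main_v2.py (the path and input size are placeholders):

import numpy as np
import tensorflow as tf

# The object returned by tf.saved_model.load is directly callable
# for models exported by exporter_main_v2.py.
detect_fn = tf.saved_model.load("exported_model/saved_model")  # placeholder path

# With --input_type image_tensor the model expects a batched uint8 image.
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)  # dummy image
detections = detect_fn(tf.constant(image))

print(detections["detection_boxes"].shape)
print(detections["num_detections"])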

I’ll look into it. In general, I was just wondering whether there is any possible way to freeze the graph at all. It seems like the Medium articles you linked above should work, but I encountered the “_UserObject has no attribute ‘inputs’” error when trying to run the scripts.

I think that, if it doesn’t require too much rewriting of your inference code, you could try the approach in the last Colab and work directly from the SavedModel, which is more natural with TF2.

I would like to chime in. I also want to convert a TF SavedModel to a frozen graph, and I found a way to do it. Please see Can't Convert tensorflow saved_model to frozen inference graph · Issue #8966 · tensorflow/models · GitHub.
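The gist, as I understand it, is to pull the concrete function from the SavedModel’s serving signature and freeze that, instead of going through the Keras .inputs attribute. A rough sketch, with a placeholder exported_model/saved_model path:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load the SavedModel and grab its serving signature; this sidesteps the
# Keras-only .inputs attribute that triggers the _UserObject error.
loaded = tf.saved_model.load("exported_model/saved_model")  # placeholder path
concrete_func = loaded.signatures["serving_default"]

# Inline all variables into the graph as constants ("freezing").
frozen_func = convert_variables_to_constants_v2(concrete_func)
graph_def = frozen_func.graph.as_graph_def()

# Serialize the frozen GraphDef to a single .pb file.
tf.io.write_graph(graph_def, "exported_model", "frozen_inference_graph.pb", as_text=False)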