'function_optimizer' returns empty graph


I am trying to run a SavedModel that I exported from a checkpoint using ‘exporter_main_v2.py’, and during conversion the ‘function_optimizer’ pass seems to return an empty graph (0 nodes). Is there any reason it would do that?


More context: I am trying to convert a MobileNetV2 SSD re-trained SavedModel into another framework format. I have successfully re-trained the model and exported the checkpoint to a SavedModel using exporter_main_v2.py. However, when I try to run the conversion script, it goes through Grappler’s ‘meta_optimizer.cc’, which eventually produces an empty graph that cannot be converted.
The following is the output that I get:

What is the conversion script?

I cannot provide that because it is proprietary. However, I believe the issue is with the export from the re-trained checkpoints to the SavedModel. The pre-trained object-detection models in the TF2 Zoo contain both the checkpoints and the saved_model.pb. If I try to convert the pre-trained saved_model.pb, I face no issues at all. However, if I export the pre-trained checkpoint (that I have just downloaded) into a SavedModel and then try to convert that, I get the error I just mentioned. So something must be going wrong in the checkpoint-to-SavedModel step (exporter_main_v2.py), right?

Do you mean that saved_model.pb in the official repo is not the same as the one generated with exporter_main_v2.py using the checkpoint in the official repo?

That’s the only explanation! The saved_model.pb that exists in the official repo converts with no issues and doesn’t result in 0 nodes and 0 edges, unlike the saved_model.pb generated with exporter_main_v2.py from the checkpoint in the same repo.

Is TF2.x model zoo or TF1.x?

In the TF2.x model zoo.

Any thoughts on why this could be happening? It might be something I have to change in the config file, but I’m using the config file from the official repo, so I wouldn’t know what to change. Do the saved_model.pb files in the official repos use those same config files? And do they use the exact same exporter_main_v2.py command? It would be really useful to know exactly which command is used, along with its arguments.
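For reference, the command I am running is essentially the standard invocation from the TF2 Object Detection API documentation (the paths below are placeholders for my actual directories):

```shell
# Standard TF2 OD API export, per the documented usage of
# exporter_main_v2.py; paths are placeholders.
python exporter_main_v2.py \
    --input_type image_tensor \
    --pipeline_config_path /path/to/pipeline.config \
    --trained_checkpoint_dir /path/to/checkpoint/ \
    --output_directory /path/to/exported_model/
```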

@thea Do you have a contact in the model garden team to collect these details?

@thea I would really appreciate any help or guidance that I can get for this.

It’s either that I’m performing the exporting process from checkpoints to a SavedModel wrong (using exporter_main_v2.py), or that some kind of post-exporting optimization is done to the final product which is then used in the official TF2 Model Zoo repos. I’m not really sure what to do from here.

@Bhack @thea no input on this at all? I would really appreciate any help I can get with this, please.

Honestly I don’t know if any maintainer is in the Forum:

/cc @markdaoust What do you think?

Ooof. There are a dozen different files there called “export_*.py”.

My first guess here is that the provided saved_models are tf1 format and you’re saving in tf2 format. And something in your conversion script isn’t quite translating.

Throwing out all the nodes sounds like something it would do if the inputs/outputs weren’t properly registered in the saved model signature.

Have you tried inspecting the saved_model? Can you reload and run it correctly from Python?
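For example, something along these lines with the saved_model_cli tool that ships with TensorFlow (the path is a placeholder):

```shell
# Dump all MetaGraphs, signatures, and their input/output tensors.
# If "serving_default" is missing, or its inputs/outputs are empty,
# that would be consistent with Grappler pruning the graph to 0 nodes.
saved_model_cli show --dir /path/to/exported_model/saved_model --all
```

That would at least tell you whether the exported SavedModel actually carries a usable serving signature.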

There’s a similar bug here where it breaks in a while-loop, but IIRC TFLite has fixed its control-flow support since then:


I made sure to be using models from the TF2 Zoo with the tutorial for TF2 OD. I am able to perform inference on my re-trained saved_model.pb and properly load it with no issues at all. However the difficulty I am facing is with converting it to a different proprietary format. As mentioned before, I can convert the saved_model.pb in the official repo, but not my trained saved_model.pb.

I suggest you open a ticket in the model repository and post the link to the ticket here, as we need to understand how the file in the zoo was exported.
I don’t think that Model Garden maintainers are (still?) subscribed to the model_garden tag in these threads.


I have posted this issue on both the ‘tensorflow/tensorflow’ and ‘tensorflow/models’ repositories. Here are the links for both:


I think that for now the issue in the models repo is enough, as it is better not to open duplicate issues across the ecosystem.

As all the ecosystem repositories (if we exclude Keras) are under the same GitHub org, we could move the tickets across repos if required.


Okay, I closed the issue under ‘tensorflow/tensorflow’. Will be waiting for a response. Thank you