Help with Freezing TensorFlow Model

I am trying to freeze a model I trained, based on a model from the TensorFlow Model Zoo, but cannot figure out how.


Things I have tried:

Used exporter_main_v2.py (models/exporter_main_v2.py at master · tensorflow/models · GitHub), as outlined in: TFODCourse/2. Training and Detection.ipynb at main · nicknochnack/TFODCourse · GitHub.

This caused the warning:

WARNING:tensorflow:Skipping full serialization of Keras layer <keras.layers.core.lambda_layer.Lambda object at 0x7fa3b11775d0>, because it is not built.

to be printed numerous times, and the final result was not readable when I tried to pass it into:

cv.dnn.readNetFromTensorflow

Tried to use freeze_graph.py; however, one of its required parameters is the output node names, and I am unable to figure out how to get those from the model. Note that the model also does not have a .meta file. Here is an image of the model's directory structure for reference:


Model Used:
ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8

Training Script Used:
model_main_tf2.py


Image of directory:

freeze: contains the result of running exporter_main_v2.py
pre-trained: contains the model downloaded from tf2 model zoo
trained: contains the result of running the model_main_tf2.py training script


If there is any more information you need, please let me know; I am new to this.

Your model contains a Lambda layer. The documentation says this layer is meant for quick experimentation and is likely to cause serialization problems. Try subclassing tf.keras.layers.Layer and implementing the same logic there instead of using Lambda.
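(For illustration, a sketch of that substitution, using ReLU6 as a guess at the kind of operation a Lambda like "Conv1_relu" might wrap:)

```python
import tensorflow as tf

# A Lambda such as tf.keras.layers.Lambda(lambda x: tf.nn.relu6(x))
# rewritten as a Layer subclass, which serializes cleanly.
class Relu6Layer(tf.keras.layers.Layer):
    def call(self, inputs):
        return tf.nn.relu6(inputs)

    def get_config(self):
        # No extra state here; returning the base config keeps the
        # layer fully serializable.
        return super().get_config()

layer = Relu6Layer()
out = layer(tf.constant([-1.0, 3.0, 8.0]))
print(out.numpy())
```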


I am able to load the model saved_model.pb found in saved_model folder in the pre-trained directory using:

tf.saved_model.load

However when using:

tf.keras.models.load_model

I get the error:


WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), NOT tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.
/usr/local/lib/python3.7/dist-packages/keras/layers/core/lambda_layer.py:299: UserWarning: google3.third_party.tensorflow.python.ops.nn_ops is not loaded, but a Lambda layer uses it. It may cause errors.
  'function_type')
/usr/local/lib/python3.7/dist-packages/keras/layers/core/lambda_layer.py:299: UserWarning: google3.third_party.tensorflow_models.object_detection.models.feature_map_generators is not loaded, but a Lambda layer uses it. It may cause errors.
  'function_type')
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 model = tf.keras.models.load_model("/content/tensorflow/workspace/pre-trained/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/saved_model")

1 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/usr/local/lib/python3.7/dist-packages/keras/layers/core/lambda_layer.py in wrapper(*args, **kwargs)
    203     # don't want to incur the runtime cost of assembling any state used for
    204     # checking only to immediately discard it.
--> 205     return
    206
    207     tracked_weights = set(v.ref() for v in self.weights)

NameError: Exception encountered when calling layer "Conv1_relu" (type Lambda).

name 'dispatch' is not defined

Call arguments received:
  • inputs=tf.Tensor(shape=(None, None, None, 32), dtype=float32)
  • mask=None
  • training=None

I am not sure why I am getting this error.

Using the first method for loading the model, I am unsure how to find out which layers are Lambda layers. I am also unsure how to modify the model so that the Lambda layers are replaced with tf.keras.layers.Layer subclasses. How can I do this? Thanks.

You can read about using and saving Lambda layers in this article: https://towardsdatascience.com/working-with-the-lambda-layer-in-keras-cfbaffdfc4c9
If you need to use a Lambda layer in your model, the recommended way is to save only the weights, then initialize a new model from the same code and load the saved weights into it.
To avoid such problems it’s better to implement custom operations by subclassing a base Layer.
You should look into the code that was used for training and saving the model. If you don't have this code, I'm not sure how to find out what custom operations are performed inside the Lambdas. The error is complaining about dependencies it can't find, which are needed by these custom layers.
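(If the model does load as a Keras object, the Lambda layers can at least be enumerated by walking the layer tree; a sketch on a toy model, where the real one would be whatever tf.keras.models.load_model returns:)

```python
import tensorflow as tf

# Toy model standing in for the real one; the same inspection works on
# any Keras model object you manage to load.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(2,), name="dense"),
    tf.keras.layers.Lambda(lambda x: tf.nn.relu6(x), name="relu6"),
])

def lambda_layers(model):
    # Collect the names of all Lambda layers, recursing into any
    # nested sub-models.
    found = []
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Lambda):
            found.append(layer.name)
        if hasattr(layer, "layers"):
            found.extend(lambda_layers(layer))
    return found

print(lambda_layers(model))  # -> ['relu6']
```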