Error when loading a saved model

Hello community,

I saved a model which I trained for a deep Q-learning project, using TensorFlow/Keras 2.7.0 on Python 3.9.9 (64-bit) on Windows 10. When I load the model again on that machine and use it for inference, it works fine.

It is not a very special model:

        # Imports assumed by this snippet:
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
        from tensorflow.keras.optimizers import Adam

        model = Sequential()
        model.add(Conv2D(256, (3, 3), input_shape=self.env.OBSERVATION_SPACE_VALUES))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Conv2D(256, (3, 3)))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Flatten())  # converts the 3D feature maps to 1D feature vectors
        model.add(Dense(self.env.ACTION_SPACE_SIZE, activation='linear'))  # one output per action
        model.compile(loss="mse", optimizer=Adam(learning_rate=LEARNING_RATE), metrics=['accuracy'])



from tensorflow import keras

model = keras.models.load_model(model_filename)

I want to use the model on a TF/Keras 2.7.0 setup on a Raspberry Pi OS 64-bit (Bullseye) machine. When I use the same loading code as on the Windows machine, I get the following error:

/usr/local/lib/python3.9/dist-packages/keras/saving/saved_model/ RuntimeWarning: Unexpected end-group tag: Not all data was converted

What does this mean?

Did you convert the model to TFLite before trying to load it on Raspberry Pi?


No. I use regular TensorFlow on the Raspberry Pi. But I think I found the issue, although it is not confirmed yet: I pushed the model to my Git repo and pulled it from there to the Raspberry Pi. When I just copy it directly, bypassing Git, the model works.

Are there benefits to using TensorFlow Lite on the RPi?

Yes, there are.
Since the RPi has significant hardware restrictions, running a regular TF model can be too heavy; that is why TFLite was created: to run ML models on-device.

Please take a look here for more information on the benefits and how to convert your model: TensorFlow Lite | ML for Mobile and Edge Devices
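As a rough sketch of the conversion step using the standard `tf.lite.TFLiteConverter` API (the tiny stand-in model and file paths below are placeholders; in practice you would load your trained model with `tf.keras.models.load_model(...)` first):

```python
import tensorflow as tf

# A tiny stand-in model; substitute your trained Keras model here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="linear"),
])

# Convert the Keras model to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model; on the Pi, load this file with
# tf.lite.Interpreter (or the lightweight tflite-runtime package).
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

A nice side effect for this thread: the `.tflite` file is a single flatbuffer, so it is also easy to checksum and transfer to the Pi without the multi-file SavedModel layout.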