Freeze layers in Tensorflow Object Detection API

Is it possible to freeze specific layers when using Tensorflow Object Detection API?

For example, I am using an EfficientDet model downloaded from the TensorFlow 2 Model Zoo. When I train the model, I want it to predict whether an object is a car, a plane, or a motorcycle. The model is already trained on these types of objects (the COCO 2017 dataset), but I want to train it further on my specific use case. However, since it is already trained on these objects, I do not want it to “forget” what it has already learned. Hence, I need to freeze some of the layers. So, is this possible? And if it is, how do I know which layers I actually need to freeze? I have found that I might be able to use the `freeze_variables` parameter in the pipeline.config file?
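For reference, `freeze_variables` (where the API version supports it) lives under `train_config` in pipeline.config and takes regular expressions that are matched against variable names; matching variables are excluded from training. A hedged sketch; the pattern `"efficientnet"` is an assumption and the right strings depend on the variable names in your checkpoint:

```
train_config {
  # ... existing fields (batch_size, fine_tune_checkpoint, etc.) ...

  # Freeze every variable whose name matches one of these regexes.
  # "efficientnet" is an assumed pattern for the EfficientDet backbone;
  # inspect your checkpoint's variable names to find the right ones.
  freeze_variables: "efficientnet"
}
```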

Thanks for any help!

This might be what you’re after:
Transfer Learning example
Specifically, these lines:

# base_model is a pretrained Keras model, e.g.:
# base_model = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))

# Fine-tune from this layer onwards
fine_tune_at = 100

# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

The problem with this is that a detection model from the Model Zoo is not a Keras model with a `.layers` attribute, so as far as I can see it cannot be frozen this way.
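As I understand it, that is why the OD API exposes `freeze_variables` instead: rather than flipping `layer.trainable`, it filters the list of trainable variables by regex. A minimal, self-contained sketch of that matching logic (the variable names below are hypothetical, loosely modelled on an EfficientDet checkpoint; inspect your own checkpoint for the real ones):

```python
import re

def filter_trainable(variable_names, freeze_patterns):
    """Drop variables whose names match any freeze pattern (regex),
    mimicking -- as I understand it -- how freeze_variables prunes
    the trainable variable list."""
    compiled = [re.compile(p) for p in freeze_patterns]
    return [name for name in variable_names
            if not any(c.search(name) for c in compiled)]

# Hypothetical variable names for illustration only:
names = [
    "efficientnet-b0/stem/conv2d/kernel",
    "efficientnet-b0/blocks_0/conv2d/kernel",
    "bifpn/node_0/conv/kernel",
    "box_net/box-predict/kernel",
    "class_net/class-predict/kernel",
]

# Freeze the backbone; keep the BiFPN and the heads trainable:
print(filter_trainable(names, [r"efficientnet"]))
# → ['bifpn/node_0/conv/kernel', 'box_net/box-predict/kernel', 'class_net/class-predict/kernel']
```

Printing the model’s variable names first and then writing patterns against them is probably the safest way to decide what to freeze.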

I don’t think the model will forget those classes if they are present in your training data.

After I trained it on some additional data, the model’s accuracy dropped by about 20% on the very classes I want to predict. Before training it was around 94%, which is what I would expect. So something is going on.

If your model is from the Model Zoo, you can also use transfer learning, i.e. continue the training process but with a very small learning_rate, such as 1e-7 or 1e-8, depending on your training set. In that case the model’s accuracy will still improve, but in small steps.
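In the TF2 OD API the learning rate is set in pipeline.config under `train_config { optimizer { ... } }`. A hedged sketch of lowering it for gentle fine-tuning; the exact optimizer and schedule blocks depend on the EfficientDet config you downloaded, and the numeric values here are placeholders, not recommendations:

```
train_config {
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          # Lowered from the Model Zoo default for fine-tuning;
          # the right value depends on your dataset.
          learning_rate_base: 1e-4
          total_steps: 25000
          warmup_learning_rate: 1e-5
          warmup_steps: 1000
        }
      }
      momentum_optimizer_value: 0.9
    }
  }
}
```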

So if I reduce the learning rate, I should be able to train it on my own data without losing all of its “progress”?

Perhaps, yes; that is the classical fine-tuning approach.