Fine tuning a model - inference mode affecting Dropouts as well?

I read the tutorial on fine tuning a model https://www.tensorflow.org/tutorials/images/transfer_learning#fine_tuning.

There it says the base_model should be kept in inference mode (training=False when calling the base_model) for fine tuning, because of the BatchNormalization layers. As far as I understand, inference mode will also affect any Dropout layers present in the base_model, so dropout would not be applied during fine tuning.

Is this the intended behavior for fine tuning, or is it just a side effect because the BatchNormalization layers require inference mode, and it would actually be preferable to keep Dropout in training mode?

I would appreciate it if someone could explain this to me.
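To illustrate what I mean, here is a toy sketch of the behavior I'm asking about. This is a hypothetical pure-Python stand-in for a Keras Dropout layer, not the real implementation; it only mimics how the `training` flag changes the layer's output:

```python
import random

def toy_dropout(inputs, rate=0.5, training=False, seed=0):
    """Toy stand-in for a Keras Dropout layer (hypothetical helper).

    In inference mode (training=False) the layer is an identity;
    in training mode it zeroes some units and rescales the survivors
    by 1/(1 - rate), as inverted dropout does.
    """
    if not training:
        return list(inputs)  # inference mode: dropout is a no-op
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in inputs]

x = [1.0, 2.0, 3.0, 4.0]
print(toy_dropout(x, training=False))  # identity: [1.0, 2.0, 3.0, 4.0]
print(toy_dropout(x, training=True))   # some units zeroed, survivors scaled
```

So if the whole base_model is called with training=False, the Dropout layers inside it behave like the first call above, which is what prompted my question.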

Hey @markdaoust , can you help here?


I doubt you’d need to adjust them independently.

It’s hard to know for sure; you can always try both and see what works for your use case.

If you do want to switch them independently, you can set the BatchNorm layers to layer.trainable=False, which switches them to training=False even while passing training=True. Details are in the last section of the BatchNormalization docs.
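A toy sketch of that special case (a hypothetical pure-Python stand-in, not the real Keras layer): the BatchNormalization docs describe that setting `layer.trainable = False` makes the layer run in inference mode (using its moving statistics) even when it is called with training=True, which is the lever that lets you keep the rest of the model, including Dropout, in training mode. In a real model you would loop over `base_model.layers` and set `trainable = False` on just the BatchNormalization instances.

```python
class ToyBatchNorm:
    """Toy stand-in for keras.layers.BatchNormalization (hypothetical).

    Mirrors the documented special case: layer.trainable = False forces
    inference behavior (moving statistics) even if training=True is passed.
    """
    def __init__(self):
        self.trainable = True  # default, as in Keras

    def call(self, x, training=False):
        # Inference behavior unless BOTH training=True and trainable=True
        use_batch_stats = training and self.trainable
        return "batch stats" if use_batch_stats else "moving stats"

bn = ToyBatchNorm()
print(bn.call([1.0], training=True))   # batch stats: normal training mode
bn.trainable = False
print(bn.call([1.0], training=True))   # moving stats: inference despite training=True
```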


Thanks for the answer.