I have taken a deep dive into transfer learning and am stuck on a question. I downloaded a pretrained model with weights, and the model has a default input shape, e.g. (380, 380, 3) for EfficientNetB4. After the first convolution stage comes a layer of shape (190, 190, 48), and so on. But if I change input_shape to, for example, (200, 200, 3), the whole model changes the dimensions of its inner layers, and the layer after the first convolution has shape (100, 100, 48), and so on. The question is: how do the loaded weights change when the model changes the dimensions of its inner layers? I thought maybe the weights are just truncated, but input_shape can also be increased, in which case all the inner layers grow. Does anyone know the answer and can explain how the pretrained weights are converted to the new sizes?
There is no algorithm for transforming a given set of weights to a new input shape, and for the convolutional layers none is needed: a convolution kernel (for EfficientNetB4's stem, shape (3, 3, 3, 48)) is slid across the image, so its number of weights is independent of the spatial input size. Changing input_shape only changes the size of the activation maps, not the weights. The part that is tied to one fixed shape is the fully connected classifier head, which is why Keras lets you pass a custom input_shape only together with include_top=False.
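To see why the convolutional weights themselves are unaffected, here is a minimal NumPy sketch (a plain stride-1, "valid" convolution, not EfficientNet's actual stem): the same 9-weight kernel can be applied to inputs of any size, and only the output map changes shape.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one kernel over a single-channel image ('valid' padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.rand(3, 3)                       # 9 weights, fixed once trained
small = conv2d_valid(np.random.rand(190, 190), kernel)
large = conv2d_valid(np.random.rand(380, 380), kernel)
print(kernel.size, small.shape, large.shape)        # 9 (188, 188) (378, 378)
```

The kernel never changes; only the activation map it produces grows or shrinks with the input.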
The standard practice is to resize your images to (380,380,3) so that they match what the model expects.
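Any image library will do the resizing (tf.image.resize, PIL.Image.resize, cv2.resize). Purely as an illustration, here is a nearest-neighbour resize in plain NumPy that takes a hypothetical (200, 200, 3) image up to (380, 380, 3):

```python
import numpy as np

def resize_nearest(img, size=(380, 380)):
    """Nearest-neighbour resize; real pipelines use tf.image.resize or PIL instead."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each target row
    cols = np.arange(size[1]) * w // size[1]   # source col for each target col
    return img[rows[:, None], cols]

img = np.random.rand(200, 200, 3)              # a hypothetical smaller input
resized = resize_nearest(img)
print(resized.shape)                           # (380, 380, 3)
```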