Unfreezing base model ruins model accuracy on mobile device

Hi,

I have a model I've trained using transfer learning. When I set the feature extractor layer to trainable, the model's val_accuracy during training greatly increases. When I then test the exported TensorFlow Lite model with the TensorFlow Lite interpreter in Python, it still performs very well. But when I put the same model in my Android app, its accuracy is pretty much useless.

If I leave the feature extractor layer set to trainable=False, the exported TensorFlow Lite model works perfectly fine on Android.

base_model_url = "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/classification/2"

feature_extractor_layer = hub.KerasLayer(
    base_model_url,
    input_shape=(img_height, img_width, 3),
    trainable=True)
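For reference, this is roughly how I convert and sanity-check the model in Python (a sketch using the standard `tf.lite.TFLiteConverter` / `tf.lite.Interpreter` APIs; `model` stands in for whatever Keras model wraps the feature extractor, and the random input is just for the comparison):

```python
import numpy as np
import tensorflow as tf

def convert_and_check(model, img_height=224, img_width=224):
    """Convert a Keras model to TFLite and return the max absolute
    difference between Keras and TFLite predictions on a random input."""
    # Convert the trained Keras model to a TFLite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()

    # Load the flatbuffer into the Python interpreter.
    interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Run the same random input through both models and compare.
    x = np.random.rand(1, img_height, img_width, 3).astype(np.float32)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    tflite_pred = interpreter.get_tensor(out["index"])
    keras_pred = model.predict(x, verbose=0)
    return float(np.max(np.abs(tflite_pred - keras_pred)))
```

With a float32 conversion like this, the Python-side predictions match the Keras model almost exactly, which is why the problem only showed up on the device.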

Any ideas on why this is the case?

Managed to find a fix myself. For anyone having a similar issue: using a slightly different method for transfer learning solved the problem. This tutorial gives a full overview of how to do it: Image classification via fine-tuning with EfficientNet.
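The steps from that tutorial can be sketched like this (a minimal version using `tf.keras.applications.EfficientNetB0` instead of the TF Hub layer; `num_classes` and `num_layers` are placeholders for your own setup). The key detail is that when unfreezing, the BatchNormalization layers are kept frozen so their moving statistics aren't destroyed:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_classes, img_size=224, weights="imagenet"):
    inputs = layers.Input(shape=(img_size, img_size, 3))
    # EfficientNet from keras.applications expects raw [0, 255] pixels;
    # normalization is built into the model itself.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, input_tensor=inputs, weights=weights)
    base.trainable = False  # stage 1: train only the new head

    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

def unfreeze_model(model, num_layers=20):
    # stage 2: unfreeze the top layers for fine-tuning, but keep
    # BatchNormalization layers frozen, and use a low learning rate.
    for layer in model.layers[-num_layers:]:
        if not isinstance(layer, layers.BatchNormalization):
            layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
```

Training the head first and then fine-tuning this way, the exported TFLite model behaved the same on Android as in the Python interpreter.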