Deep learning with float64 values

My idea is that I can use a slower learning rate if I have float64 instead of float32 values. That way, I can apply more augmentation (Gaussian noise) or dropout regularization before the model reaches a minimum, and so (in theory) reach a flatter minimum with better generalization.

How could I implement transfer learning with, say, EfficientNet (as in "Image classification via fine-tuning with EfficientNet"), but converting the weights and biases to float64, adding a classification layer on top, and training with that datatype as well?

If anyone can help me here, that would be great! Thanks.

Hi @Shavkat, to convert the weights and biases to float64, set the Keras default float type to float64 before building the model. (Cloning the model and casting the weight arrays does not work here: clone_model re-initializes the weights, and set_weights casts the values back to each variable's existing dtype, so the variables would stay float32.)

import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0

# Set the default float type *before* the model is built, so every layer
# creates its variables as float64; the pretrained float32 weights are
# cast to float64 when they are loaded.
tf.keras.backend.set_floatx('float64')

base_model = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
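
You can quickly check that the conversion worked:

print(base_model.weights[0].dtype)  # expected: <dtype: 'float64'>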

and now you can add layers on top of that base model and train it. Thank you.
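
For example, here is a minimal sketch of a float64 classification head (the number of classes, dropout rate, optimizer, and learning rate are placeholders to adapt to your task):

num_classes = 10  # placeholder: set to your dataset's number of classes

inputs = tf.keras.Input(shape=(224, 224, 3))  # dtype defaults to float64 now
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)  # the dropout regularization you mentioned
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

base_model.trainable = False  # train only the new head first
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

Because set_floatx('float64') is still in effect, the new head layers are created in float64 as well. Keep in mind that float64 training is considerably slower than float32 on most GPUs.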