My idea is that with float64 instead of float32 weights I can use a slower learning rate, since float64 can represent much smaller weight updates without them vanishing in rounding. That way I can apply more augmentation (Gaussian noise) or dropout regularization before the model reaches a minimum, and (in theory) end up in a flatter minimum with better generalization.
How could I implement transfer learning with, say, EfficientNet (as in the Keras tutorial "Image classification via fine-tuning with EfficientNet"), but with the weights and biases converted to float64, and with a classification head added on top and trained in that dtype as well?
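For reference, here is a minimal sketch of what I have in mind, based on the Keras tutorial's setup. Calling `keras.backend.set_floatx("float64")` before any layers are built should make every layer, including the pretrained backbone, use float64 variables; the float32 ImageNet weights are then cast to float64 on assignment. The number of classes (10), input size, dropout rate, and learning rate below are placeholders, not values from the tutorial:

```python
import tensorflow as tf
from tensorflow import keras

# Set the default float type BEFORE building any layers, so that
# all layers (including the pretrained backbone) are created in float64.
keras.backend.set_floatx("float64")

# Pretrained backbone without its classification head. The float32
# ImageNet weights are cast to float64 when assigned to the variables.
base = keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the backbone for the transfer-learning phase

# New float64 classification head on top (10 classes as a placeholder).
inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.3)(x)  # the regularization mentioned above
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # deliberately slow LR
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Every variable in the model should now be float64.
print({w.dtype for w in model.weights})
```

Is this the right approach, or do the casts break somewhere (e.g. when loading the pretrained weights, or when later unfreezing `base` for fine-tuning)?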
If anyone can help me here, that would be great! Thanks.