Regularization loss between network parameters

Hello, I would like to implement some ideas from the paper [1603.06432] Beyond Sharing Weights for Deep Domain Adaptation.

The paper is about domain adaptation, where two networks are trained simultaneously and their weights are kept related through a weight regularizer. Using some distance metric, the regularizer ensures that the weights of the two networks remain close, so the function takes the weights of both nets as input. How can I implement this in Keras? Should I create a custom tf.keras.regularizers.Regularizer? Any help and ideas are highly appreciated.

Hi @Manu

Welcome to the TensorFlow Forum!

Yes, you can define a custom Regularizer that computes the distance between corresponding weights in the two models, and then apply it to specific layers in both models via the kernel_regularizer argument. Please refer to the Scalable model compression doc for reference. Thank you.
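For illustration, here is a minimal sketch of that idea (the layer names, shapes, regularization strength, and the choice of a squared-L2 penalty are assumptions for the example, not taken from the paper or the doc above). One caveat: a Regularizer's `__call__` only receives the weights of the layer it is attached to, so the partner network's kernel has to be captured by reference and wired up after both models are built:

```python
import tensorflow as tf


class TiedWeightsRegularizer(tf.keras.regularizers.Regularizer):
    """Hypothetical regularizer: penalizes the squared L2 distance between
    this layer's kernel and the kernel of a partner layer in the other net."""

    def __init__(self, strength=1e-3):
        self.strength = strength
        self.partner_layer = None  # wired up after both models are built

    def __call__(self, weights):
        # Contribute no penalty until the partner layer has been attached.
        if self.partner_layer is None:
            return tf.zeros(())
        return self.strength * tf.reduce_sum(
            tf.square(weights - self.partner_layer.kernel))

    def get_config(self):
        return {"strength": self.strength}


def make_stream(name, regularizer=None):
    # Toy architecture; both streams need identical layer shapes
    # for the weight distance to be well defined.
    inputs = tf.keras.Input(shape=(32,))
    x = tf.keras.layers.Dense(64, activation="relu", name=f"{name}_dense",
                              kernel_regularizer=regularizer)(inputs)
    outputs = tf.keras.layers.Dense(10, name=f"{name}_out")(x)
    return tf.keras.Model(inputs, outputs, name=name)


reg = TiedWeightsRegularizer(strength=1e-3)
source_net = make_stream("source", regularizer=reg)
target_net = make_stream("target")

# Point the regularizer at the corresponding layer of the other network.
reg.partner_layer = target_net.get_layer("target_dense")
```

Note that the penalty is collected in source_net.losses, so gradients reach the target kernel only if both networks are trained together, e.g. inside one joint model or a custom training loop over the combined trainable weights. An alternative that avoids the wiring step is to build a joint model and add one penalty term per pair of corresponding weights with model.add_loss (the default-argument lambdas avoid Python's late binding in the loop):

```python
joint = tf.keras.Model([source_net.input, target_net.input],
                       [source_net.output, target_net.output])
for w_s, w_t in zip(source_net.trainable_weights,
                    target_net.trainable_weights):
    # Loss terms that do not depend on model inputs must be zero-arg callables.
    joint.add_loss(lambda a=w_s, b=w_t: 1e-3 * tf.reduce_sum(tf.square(a - b)))
```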