Regularization loss between network parameters

Hello, I would like to implement some ideas from the paper [1603.06432] Beyond Sharing Weights for Deep Domain Adaptation.

This paper is about domain adaptation, where two networks are trained simultaneously. The weights of the two networks should remain related, which is achieved with a weight regularizer: a distance metric that keeps the weights of both networks close to each other. So the regularization function takes the weights of both nets as input. How can I implement this in Keras? Should I create a custom tf.keras.regularizers.Regularizer? Any help and ideas are highly appreciated.
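For context, here is a minimal sketch of what I have in mind. Note that a custom `tf.keras.regularizers.Regularizer` only receives a single weight tensor when called, so coupling the weights of two networks may instead need the penalty added directly to the loss, e.g. in a custom train step. The network sizes, the plain L2 distance (rather than the paper's exact loss), and the `lam` hyperparameter below are all my own placeholder assumptions:

```python
import tensorflow as tf

def make_net():
    # Placeholder twin architecture; the real networks would differ.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

net_a = make_net()
net_b = make_net()

def weight_distance(net1, net2):
    # Sum of squared differences between corresponding weight tensors.
    # Assumes both nets have identical architectures, so their
    # trainable_weights lists line up one-to-one.
    return tf.add_n([
        tf.reduce_sum(tf.square(w1 - w2))
        for w1, w2 in zip(net1.trainable_weights, net2.trainable_weights)
    ])

lam = 0.01  # assumed regularization strength
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(xa, ya, xb, yb):
    with tf.GradientTape() as tape:
        # Task losses for both domains plus the weight-coupling penalty.
        loss = (loss_fn(ya, net_a(xa, training=True))
                + loss_fn(yb, net_b(xb, training=True))
                + lam * weight_distance(net_a, net_b))
    variables = net_a.trainable_weights + net_b.trainable_weights
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

Is a custom train step like this the idiomatic way, or is there a cleaner mechanism (e.g. `add_loss` on a combined model)?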