Should 'gamma' and 'beta' be skipped in L2 regularization?

Hi all,
I was going through the Matterport Mask R-CNN code. They skip the 'gamma' and 'beta' weights of batch normalization when applying L2 regularization.

        # Add L2 Regularization
        # Skip gamma and beta weights of batch normalization layers.
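        # Each penalty is divided by the number of elements in the weight
        # tensor (tf.size(w)), so it acts on the mean squared weight
        # rather than the sum.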
        reg_losses = [
            keras.regularizers.l2(self.config.WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32)
            for w in self.keras_model.trainable_weights
            if 'gamma' not in w.name and 'beta' not in w.name]

Is there any reason for this?
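
For context, the same name-based filtering can be reproduced on a small standalone Keras model. Below is a minimal sketch; the toy model and the weight-decay value are made up for illustration, and only the gamma/beta exclusion mirrors the Matterport snippet above.

    import tensorflow as tf
    from tensorflow import keras

    WEIGHT_DECAY = 0.0001  # placeholder value, stands in for config.WEIGHT_DECAY

    # Toy model containing both regular layers and a batch-norm layer.
    model = keras.Sequential([
        keras.Input(shape=(16,)),
        keras.layers.Dense(32, activation='relu'),
        keras.layers.BatchNormalization(),
        keras.layers.Dense(10),
    ])

    # One L2 penalty per trainable weight, skipping the batch-norm
    # 'gamma' (scale) and 'beta' (shift) variables by name, and dividing
    # each penalty by the number of elements in that weight tensor.
    reg_losses = [
        keras.regularizers.l2(WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32)
        for w in model.trainable_weights
        if 'gamma' not in w.name and 'beta' not in w.name
    ]
    total_reg_loss = tf.add_n(reg_losses)

    # Inspect which variables exist: kernels and biases are penalized,
    # gamma/beta are not.
    print([w.name for w in model.trainable_weights])
    print(float(total_reg_loss))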

Hi @Vedanshu

Could you please let us know if this issue still persists? If so, please provide a minimal reproducible example along with some more details on the issue so we can understand it better. Thank you.