Custom initializer: deserialisation issues in Keras TF 2.15

I am having deserialisation issues with TF 2.15.

I implemented ICNR as a wrapper around an arbitrary initializer, and I cannot load the model after saving it in the .keras format. I get the following error:

x = self.initializer(new_shape, dtype)
TypeError: 'dict' object is not callable

Here is a minimal sample that reproduces the error:

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, initializers, Input, Model, optimizers, saving
from tensorflow.python.layers.utils import normalize_tuple

class ICNR(initializers.Initializer):
    def __init__(self, initializer, scale=1, **kwargs):
        """ICNR initializer for checkerboard artifact free transpose convolution

        Code adapted from
        Discussed at
        Original paper:

        initializer : Initializer
            Initializer used for kernels (glorot uniform, etc.)
        scale : iterable of two integers, or a single integer
            Stride of the transpose convolution
            (a.k.a. scale factor of sub pixel convolution)
        """
        self.scale = normalize_tuple(scale, 2, "scale")
        self.initializer = initializer

    def __call__(self, shape, dtype, **kwargs):
        # super().__call__(**kwargs)
        if self.scale == 1:
            return self.initializer(shape, dtype)
        size = shape[:2]
        new_shape = np.array(shape)
        new_shape[:2] //= self.scale
        x = self.initializer(new_shape, dtype)
        x = tf.transpose(x, perm=[2, 0, 1, 3])
        x = tf.image.resize(x, size=size, method="nearest")
        x = tf.transpose(x, perm=[1, 2, 0, 3])
        return x

    def get_config(self):
        config = {"scale": self.scale,
                  "initializer": self.initializer}
        return config

factor = 2
num_filters = 64

inputs = Input(shape=(None, None, 3))
outputs = layers.Conv2D(num_filters * (factor ** 2), 3, padding="same", name="Upsample-1",
               kernel_initializer=ICNR(initializers.GlorotUniform(), factor),)(inputs)
model = Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(), loss="mean_squared_error")
model.save("test.keras", overwrite=True)
loaded_model = saving.load_model("test.keras", safe_mode=False, custom_objects={"ICNR": ICNR})
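For context, Keras round-trips initializers through plain config dicts when saving, so my guess is that the dict in the traceback is the stored config of the wrapped initializer, which is never turned back into an object on load. A minimal sketch of that round trip with the standard helpers (nothing here is specific to my wrapper):

```python
from tensorflow.keras import initializers

# Saving serializes a built-in initializer into a plain config dict ...
init = initializers.GlorotUniform()
cfg = initializers.serialize(init)
print(isinstance(cfg, dict))  # the dict is not callable

# ... and it must be explicitly deserialized back into an object.
restored = initializers.deserialize(cfg)
print(callable(restored))
```

If my wrapper stores only the raw dict in `self.initializer`, calling it in `__call__` would produce exactly this `TypeError`.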

Can somebody tell me what I am doing wrong? I reused this snippet from an older Keras version, where it seemed to work back in the day…