`requires_output_quantize` from tfmot is not working as expected


I am working with tfmot to quantize some specific layers in my model. First, I did not annotate those layers. I do not understand why the check in the tfmot source uses `not isinstance` rather than `isinstance`. If we used `isinstance`, then the layer would be of type `QuantizeAnnotate` and we would push it to `requires_output_quantize`, as the name suggests. Coming to `_quantize` now: why do we need to verify that the layer is *not* in `requires_output_quantize`, when this variable holds layers that do not need to be quantized? I am confused; this sounds contradictory to me.
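To make my reading concrete, here is a toy sketch of how I understand the two collections get populated. The class names mirror the tfmot source, but the classes themselves and the loop are simplified stand-ins I wrote for illustration; the real `quantize_apply` does more than this:

```python
# Toy stand-ins -- NOT the real tfmot classes, just placeholders to
# illustrate the `not isinstance` check I am asking about.
class QuantizeAnnotate:
    def __init__(self, name):
        self.name = name

class PlainLayer:
    def __init__(self, name):
        self.name = name

layer_quantize_map = {}           # annotated layers -> quantized in full
requires_output_quantize = set()  # un-annotated layers (my reading)

for layer in [QuantizeAnnotate("dense"), PlainLayer("concat")]:
    if not isinstance(layer, QuantizeAnnotate):
        # The branch that confuses me: the UN-annotated layer is the one
        # that lands in requires_output_quantize.
        requires_output_quantize.add(layer.name)
    else:
        layer_quantize_map[layer.name] = {"quantize_config": None}

print(layer_quantize_map)        # {'dense': {'quantize_config': None}}
print(requires_output_quantize)  # {'concat'}
```

So with `not isinstance`, it is exactly the layers I did *not* annotate that end up in `requires_output_quantize`, which is the opposite of what I expected from the name.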

NB: I have re-implemented the _quantize() function like so:

    def _quantize(layer):  # pylint: disable=missing-docstring
        if (
            (layer.name not in layer_quantize_map)
            or isinstance(layer, quantize_wrapper.QuantizeWrapper)
            or issubclass(type(layer), QuantizeLayer)
        ):
            # It supports custom QuantizeWrapper subclasses.
            print(f"Layer is {layer.__class__}")
            return layer

        if layer.name in requires_output_quantize:
            if not quantize_registry.supports(layer):
                return layer
            full_quantize_config = quantize_registry.get_quantize_config(layer)
            if not full_quantize_config:
                return layer
            quantize_config = qat_conf.OutputOnlyConfig(full_quantize_config)
        else:
            quantize_config = layer_quantize_map[layer.name].get("quantize_config")
            if not quantize_config and quantize_registry.supports(layer):
                quantize_config = quantize_registry.get_quantize_config(layer)

        if not quantize_config:
            error_msg = (
                "Layer {}:{} is not supported. You can quantize this "
                "layer by passing a `tfmot.quantization.keras.QuantizeConfig` "
                "instance to the `quantize_annotate_layer` API."
            )
            raise RuntimeError(
                error_msg.format(layer.name, layer.__class__, quantize_registry.__class__)
            )

        quantize_config = copy.deepcopy(quantize_config)
        return quantize_wrapper.QuantizeWrapperV2(layer, quantize_config)

I removed `layer.name not in requires_output_quantize` from the first if statement, and it does work for me. But I still do not understand how this could work in general.
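For what it's worth, here is a minimal pure-Python comparison of the original guard and my modified one (the set contents are hypothetical, just to make the difference concrete). It at least shows where the two versions diverge:

```python
# Hypothetical contents, chosen only to exercise both branches.
layer_quantize_map = {"dense"}       # layers annotated for full quantization
requires_output_quantize = {"relu"}  # layers slated for output-only handling

def original_guard(name):
    # Upstream tfmot: return the layer unchanged only if it is in
    # NEITHER collection.
    return name not in layer_quantize_map and name not in requires_output_quantize

def my_guard(name):
    # My re-implementation: return unchanged every layer missing
    # from layer_quantize_map.
    return name not in layer_quantize_map

# "relu" is in requires_output_quantize:
print(original_guard("relu"))  # False -> falls through to the output-only branch
print(my_guard("relu"))        # True  -> returned early, never touched

# A layer in neither collection is skipped by both versions:
print(original_guard("conv2d"))  # True
print(my_guard("conv2d"))        # True
```

So with my change, any layer that sits in `requires_output_quantize` but not in `layer_quantize_map` is returned early instead of reaching the output-only branch, which may be why the original condition checks both collections.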