How to activate Dropout during Serving?

Hello. I am deploying with TensorFlow Serving and want to enable dropout at inference time, so that it behaves exactly as in regular TensorFlow when the model is called with training=True. Can I do this in the Serving config file?
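For context, this is the kind of behavior I mean. A common workaround (a sketch, not necessarily the only way; the model architecture, shapes, and export path here are placeholders I made up) is to bake training=True into the exported serving signature rather than configuring it in Serving itself:

```python
import tensorflow as tf

# Hypothetical toy model; layer sizes and dropout rate are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2),
])
model.build(input_shape=(None, 10))

# Wrap the model in a tf.function that forces training=True, so the
# Dropout layer stays active, and export that as the serving signature.
@tf.function(input_signature=[tf.TensorSpec([None, 10], tf.float32)])
def serve_with_dropout(x):
    return {"outputs": model(x, training=True)}

tf.saved_model.save(
    model,
    "/tmp/mc_dropout_model",
    signatures={"serving_default": serve_with_dropout},
)
```

With this export, two requests with the same input should return different outputs, since a fresh dropout mask is sampled on every call. But I would prefer to do this via the Serving configuration, if possible, without re-exporting the model.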

The motivation behind this is that I want to pass the same identical input multiple times in a single batch/request, so that the spread of the stochastic predictions gives me an uncertainty estimate.
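To make the aggregation step concrete, here is a minimal sketch of the idea in NumPy, with a made-up stochastic forward pass standing in for the real dropout-enabled model (the weights, dropout rate, and shapes are all placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, rate=0.5):
    # Hypothetical stand-in for one dropout-enabled forward pass:
    # a fixed linear layer preceded by a random dropout mask,
    # with inverted-dropout scaling by 1/(1 - rate).
    w = np.ones((4, 2))
    mask = rng.random(x.shape) >= rate
    return (x * mask / (1.0 - rate)) @ w

x = np.array([1.0, 2.0, 3.0, 4.0])
# Tile the identical input T times, one row per Monte-Carlo sample.
T = 100
batch = np.tile(x, (T, 1))
samples = np.stack([stochastic_forward(row) for row in batch])

mean = samples.mean(axis=0)  # MC predictive mean per output
std = samples.std(axis=0)    # per-output uncertainty estimate
```

Since all T samples come from one request, the joint spread across the correlated outputs is preserved in `samples`, which is exactly what I am after.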

I know there are other ways to do this, like sampling from the output of a softmax distribution, potentially with temperature scaling, etc., but that really doesn't apply in my case. I have multiple outputs and I want a joint Monte-Carlo distribution, since the outputs are correlated. (I hope the reasoning is understandable, but it shouldn't distract from the main question.)