How to keep the GPU out of the way in predict calls?


Hello TF users,
I have a CNN model that was already fitted on a machine with a GPU.
Now I want to use this model for predictions on the CPU only, but the machine I am running on also has a GPU.
I disabled the GPU by setting:
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
at the top of the module that loads the classifier and makes the predictions.
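For context, here is a minimal sketch of what that module looks like (the model path and function name are placeholders, not my real code):

```python
import os

# Hide all CUDA devices before TensorFlow is imported (see issue #152).
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import numpy as np
import tensorflow as tf  # imported only after the environment variables are set


def predict(batch):
    # Placeholder path: the real module loads the already-fitted CNN from disk.
    model = tf.keras.models.load_model("cnn_model.keras")
    return model.predict(np.asarray(batch))
```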
Still, I see messages like:
2024-04-17 12:12:14.215413: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered,

which suggests that the GPU is somehow still being addressed.
How can I keep the GPU COMPLETELY out of the way, at ANY stage?

Thanks in advance
Chris