GPU available but cuDNN not enabled?

I have installed tensorflow-gpu, cudatoolkit, and cuDNN. When I run the following code:

import tensorflow as tf

# Print TensorFlow version
print("TensorFlow version:", tf.__version__)

# Check if GPU is available
if tf.test.is_gpu_available():
    print("GPU is available.")
    # Check if cuDNN is enabled
    if tf.config.experimental.list_physical_devices('GPU')[0].name.startswith("GPU"):
        print("cuDNN is enabled.")
    else:
        print("cuDNN is NOT enabled.")
else:
    print("No GPU available.")

I get the following output:

TensorFlow version: 2.6.0
GPU is available.
cuDNN is NOT enabled.

I searched the net for how to enable cuDNN but did not find any valid result. What is wrong with my installation, and how do I enable cuDNN?

Hi @Paras_Salunkhe, tf.config.list_physical_devices('GPU')[0].name returns /physical_device:GPU:0, so startswith("GPU") evaluates to False and the else branch runs. Please try the code below, which checks the device_type attribute instead:

import tensorflow as tf

# Print TensorFlow version
print("TensorFlow version:", tf.__version__)

# Check if GPU is available
if tf.test.is_gpu_available():
    print("GPU is available.")
    # Check if cuDNN is enabled
    if tf.config.experimental.list_physical_devices('GPU')[0].device_type.startswith("GPU"):
        print("cuDNN is enabled.")
    else:
        print("cuDNN is NOT enabled.")
else:
    print("No GPU available.")

# Output:
TensorFlow version: 2.12.0
GPU is available.
cuDNN is enabled.
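
Note that this check only confirms that a GPU device is visible to TensorFlow. If you also want to confirm that your TensorFlow build was compiled against cuDNN and see which version, here is a minimal sketch using tf.sysconfig.get_build_info() (the exact keys in the returned dictionary can vary between builds, so missing keys are handled with defaults):

import tensorflow as tf

# Inspect the build information of the installed TensorFlow package.
# On GPU builds this dictionary typically reports the CUDA and cuDNN
# versions the wheel was compiled against.
build_info = tf.sysconfig.get_build_info()

print("Built with CUDA:", build_info.get("is_cuda_build", False))
print("CUDA version:", build_info.get("cuda_version", "n/a"))
print("cuDNN version:", build_info.get("cudnn_version", "n/a"))

# GPUs actually visible to the runtime
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))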

Thank You!

It worked…thank you.