Why `cuda` works with `torch` but not with `tensorflow`

I’m trying to use my GPU from a Jupyter Notebook, and I got stuck with TensorFlow, but it works fine with torch.

I have the following setup:


(myen2v) C:\Users\Jan>conda list cudnn
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
cudnn                     8.9.2.26               cuda11_0    anaconda

(myen2v) C:\Users\Jan>conda list cuda
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
cudatoolkit               11.8.0               hd77b12b_0

(myen2v) C:\Users\Jan>conda list torch
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
pytorch                   2.0.1           cpu_py38hb0bdfb8_0
torch                     2.1.0                    pypi_0    pypi

(myen2v) C:\Users\Jan>conda list tensor
# packages in environment at D:\BitDownlD\Anaconda8\envs\myen2v:
#
# Name                    Version                   Build  Channel
tensorboard               2.13.0                   pypi_0    pypi
tensorboard-data-server   0.7.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1            py38haa95532_0
tensorflow                2.13.0                   pypi_0    pypi
tensorflow-base           2.3.0           eigen_py38h75a453f_0
tensorflow-estimator      2.13.0                   pypi_0    pypi
tensorflow-gpu            2.3.0                    pypi_0    pypi
tensorflow-gpu-estimator  2.3.0                    pypi_0    pypi
tensorflow-io-gcs-filesystem 0.31.0                   pypi_0    pypi

I can run this code:


import torch

# Create tensors on the GPU
a = torch.tensor([1, 2, 3], device="cuda")
b = torch.tensor([4, 5, 6], device="cuda")

# Perform an operation on the GPU
c = a + b
print(c)
# Output: tensor([5, 7, 9], device='cuda:0')
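For completeness, I assume the usual availability checks would confirm the same thing (just a sketch, I haven’t pasted their output):

import torch

# Confirm that PyTorch sees the GPU
print(torch.cuda.is_available())       # should be True, given the code above works
print(torch.version.cuda)              # CUDA version the installed wheel was built with
print(torch.cuda.get_device_name(0))   # name of the detected GPU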

But I’m unable to run the code below:

import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
print("Num GPUs:", len(physical_devices))

I’m getting this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[12], line 1
----> 1 physical_devices = tf.config.list_physical_devices('GPU')
      2 print("Num GPUs:", len(physical_devices))

AttributeError: module 'tensorflow' has no attribute 'config'

Even this doesn’t work:

import tensorflow as tf
print("Num of GPUs available: ", len(tf.test.gpu_device_name()))
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[13], line 2
      1 import tensorflow as tf
----> 2 print("Num of GPUs available: ", len(tf.test.gpu_device_name()))

AttributeError: module 'tensorflow' has no attribute 'test'
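Both errors make me wonder whether the `tensorflow` that actually gets imported is the 2.13 wheel at all, since the environment also contains the old `tensorflow-gpu`/`tensorflow-base` 2.3 packages. I assume a check like this would show which copy is loaded (just a sketch):

import tensorflow as tf

# Check which TensorFlow installation is actually being imported
print(tf.__version__)   # expected 2.13.0 if the pip wheel wins
print(tf.__file__)      # path of the site-packages copy in use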

Hi @Jan_Pax, as per the tested build configurations, 2.13 supports cuDNN 8.6, but I can see that you are using 8.9. Please refer to this installation guide and follow the steps for a smooth installation. Thank you.
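If it helps, you can also check which CUDA/cuDNN versions your TensorFlow wheel was built against via `tf.sysconfig.get_build_info()` (a sketch; the cuda/cudnn keys only appear for CUDA-enabled builds):

import tensorflow as tf

# Show what this TensorFlow build expects
info = tf.sysconfig.get_build_info()
print(info.get("is_cuda_build"))    # False means a CPU-only wheel
print(info.get("cuda_version"))     # e.g. '11.8' on a CUDA build, None otherwise
print(info.get("cudnn_version"))    # e.g. '8' on a CUDA build, None otherwise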

Some magic must have happened. After downgrading cuDNN, TensorFlow could not even be imported. Then, after installing the latest version of cuDNN again,

import tensorflow as tf # TensorFlow registers PluggableDevices here. 
tf.config.list_physical_devices()

suddenly prints the correct result:
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]

How is that possible?

I misread the output. TensorFlow still uses the CPU. Why? How do I downgrade to cuDNN 8.6?
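Would something like this be the right way to pin it? (I’m guessing at the channel and version here; I’m not sure a cuDNN 8.6 build is actually published for win-64.)

# Guess: pin cuDNN 8.6 in the same conda env (not sure this exact build exists for win-64)
conda install -c conda-forge cudnn=8.6 cudatoolkit=11.8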