TensorFlow on WSL2 cannot find my GPU, despite following the guide for TensorFlow 2.12

I have followed the tutorial at the following link: Install TensorFlow with pip,
and yet I still cannot find my GPU using the command shown below.

(tf) neizo@Legion:/mnt/c/Users/Neizo$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-04-03 18:04:09.520372: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-03 18:04:10.132026: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-04-03 18:04:10.817814: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-03 18:04:10.853503: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[]
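
For reference, here is a quick way to compare the CUDA/cuDNN versions the installed wheel was built against with what the environment provides (a minimal sketch using tf.sysconfig.get_build_info(), which TF 2.x pip wheels expose):

# Diagnostic sketch: print the CUDA/cuDNN versions this TensorFlow wheel was
# built against, plus the devices it can actually see. The keys may be absent
# on a CPU-only wheel, hence the .get() calls.
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print("built with CUDA:", info.get("cuda_version"))
print("built with cuDNN:", info.get("cudnn_version"))
print("visible GPUs:", tf.config.list_physical_devices("GPU"))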

I have tried googling it, but I couldn't really find anything.

I have rebooted my system, and it still doesn't work. I am on a Lenovo Legion 5 Pro 16", with an RTX 3070.

Thanks for the help

EDIT: In case it helps, here is the output of nvidia-smi:

(tf) neizo@Legion:/mnt/c/Users/Neizo$ nvidia-smi
Tue Apr  4 09:23:28 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.41.03              Driver Version: 531.41       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070 L...    On | 00000000:01:00.0  On |                  N/A |
| N/A   27C    P8               10W /  N/A|    424MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
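
Since nvidia-smi works inside WSL2, the driver side looks fine; the failure above is TensorFlow being unable to dlopen the user-space CUDA/cuDNN libraries. Here is a direct way to reproduce that dlopen step (a sketch; the soname libcudnn.so.8 is an assumption based on the cuDNN 8.x that TF 2.12 expects):

# Sketch: try to dlopen cuDNN directly, mirroring what TensorFlow does when
# it prints "Cannot dlopen some GPU libraries". libcudnn.so.8 assumes the
# cuDNN 8.x requirement of TF 2.12.
import ctypes

try:
    ctypes.CDLL("libcudnn.so.8")
    print("libcudnn.so.8 loaded OK")
except OSError as e:
    print("failed to load libcudnn.so.8:", e)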

@Neizo
After trying a bunch of different things, I found the solution and posted it on StackOverflow.
Link:


@Neizo,

Welcome to the TensorFlow Forum!

Configure the system paths as shown below and let us know if that resolves the issue:

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
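
After re-activating the environment, you can verify that the paths took effect with a quick check (a minimal sketch, assuming the nvidia-cudnn-cu11 pip package from the install guide is present):

# Verification sketch: locate the pip-installed cuDNN package and check that
# its lib directory is reachable via LD_LIBRARY_PATH.
import os
import nvidia.cudnn

cudnn_dir = os.path.dirname(nvidia.cudnn.__file__)
print("cuDNN package dir:", cudnn_dir)
ld_paths = os.environ.get("LD_LIBRARY_PATH", "").split(":")
print("lib dir on LD_LIBRARY_PATH:", any(p.startswith(cudnn_dir) for p in ld_paths))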

Thank you!

@chunduriv
Thanks for the instructions above. However, I am using pyenv, and although I updated the commands accordingly, TensorFlow still claims that it is unable to register cuDNN.

Here is what I added to my bash profile when using the default Python environment under pyenv:
export CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pyenv prefix)/lib/:$CUDNN_PATH/lib

and below are the errors:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-02-03 00:12:19.632471: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-02-03 00:12:19.652827: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-03 00:12:19.652877: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-03 00:12:19.653387: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-03 00:12:19.656488: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-03 00:12:20.097657: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-02-03 00:12:20.489245: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-02-03 00:12:20.504268: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-02-03 00:12:20.504329: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
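
Note that despite the "Unable to register ... factory" errors above, the last line shows the GPU is registered, so those messages appear to be log noise rather than a real failure. A quick smoke test to confirm the device is actually usable, not just listed (a minimal sketch):

# Smoke-test sketch: run a small matmul pinned to the GPU and print where the
# result was placed; expect a device string ending in /device:GPU:0.
import tensorflow as tf

with tf.device("/GPU:0"):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)
print("result device:", c.device)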