Can't get TensorFlow GPU to work on WSL 2


I have installed the latest drivers, CUDA, and cuDNN on the host machine, and both CUDA and cuDNN inside WSL.

If I run nvidia-smi, I do see the GPU information, both outside and inside WSL:

sergio@DESKTOP-U0MALDT:~$ nvidia-smi
Thu Apr 25 18:24:10 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 551.78         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4070        On  |   00000000:07:00.0  On |                  N/A |
|  0%   43C    P5             23W /  200W |    3082MiB /  12282MiB |     22%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A        28      G   /Xwayland                                   N/A      |
+-----------------------------------------------------------------------------------------+

However, when getting the list of available GPUs through TensorFlow, I get an empty array:

sergio@DESKTOP-U0MALDT:~$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-04-25 18:29:16.082040: I tensorflow/core/platform/] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-25 18:29:16.671581: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Could not find TensorRT
2024-04-25 18:29:17.248700: I external/local_xla/xla/stream_executor/cuda/] could not open file to read NUMA node: /sys/bus/pci/devices/0000:07:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-04-25 18:29:17.280233: W tensorflow/core/common_runtime/gpu/] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

It tells me some libraries are missing; however, I've installed everything listed in the tutorial, and it still gives me the very same output.

What should I do? How do I proceed from there?
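One thing worth checking before reinstalling anything: the "Cannot dlopen some GPU libraries" warning usually means the dynamic loader inside WSL cannot find the CUDA shared libraries. As a minimal, stdlib-only sanity check (the library name stems below are the usual ones; adjust if your install differs), you can ask the loader directly:

```python
# Check whether the dynamic loader can locate the CUDA-related libraries
# that TensorFlow tries to dlopen. None means "not found on the search path".
import ctypes.util

for name in ("cudart", "cudnn", "cublas", "cufft"):
    path = ctypes.util.find_library(name)
    print(f"lib{name}: {path if path else 'NOT FOUND'}")
```

If these all come back NOT FOUND inside WSL while nvidia-smi works, the problem is the library search path, not the driver.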


Welcome @SergioGMN to the TensorFlow Forum.

Unfortunately, the exact same problem has been reported by Linux users as well!

A revised document addressing TensorFlow GPU installation for Linux users is pending review: pull request #2299.

You could try the extra steps described in that document for Linux users.
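For reference, the workaround commonly reported for the "Cannot dlopen some GPU libraries" warning with a pip-installed TensorFlow is to install the bundled NVIDIA CUDA wheels and point LD_LIBRARY_PATH at them. This is a sketch of that usual workaround (not a quote from the pending document, so treat the exact paths as assumptions):

```shell
# Reinstall TensorFlow together with the NVIDIA CUDA/cuDNN pip wheels
pip install --upgrade 'tensorflow[and-cuda]'

# Point the dynamic loader at the CUDA libraries those wheels ship.
# 'nvidia' is the namespace package the wheels install into; adjust
# if your environment lays things out differently.
NVIDIA_DIR=$(python3 -c "import nvidia; print(nvidia.__path__[0])")
for d in "$NVIDIA_DIR"/*/lib; do
  export LD_LIBRARY_PATH="$d:$LD_LIBRARY_PATH"
done

# Re-check GPU visibility
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

Note that the export only affects the current shell session; to make it permanent, add the loop to your ~/.bashrc.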

The issue has also been raised on GitHub: issue #63362.

I hope it helps!
