TensorFlow on WSL2 cannot find my GPU, despite following the guide for TensorFlow 2.12

I have followed the tutorial at the following link: Install TensorFlow with pip,
and yet I still cannot find my GPU, as shown below:

(tf) neizo@Legion:/mnt/c/Users/Neizo$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-04-03 18:04:09.520372: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-03 18:04:10.132026: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-04-03 18:04:10.817814: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-04-03 18:04:10.853503: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

I have tried googling it, but I couldn't really find anything.

I have rebooted my system, and it still doesn't work. I am on a Lenovo Legion 5 Pro 16" with an RTX 3070.

Thanks for the help

EDIT: In case it helps, here is the output of nvidia-smi:

(tf) neizo@Legion:/mnt/c/Users/Neizo$ nvidia-smi
Tue Apr  4 09:23:28 2023
| NVIDIA-SMI 530.41.03              Driver Version: 531.41       CUDA Version: 12.1     |
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|   0  NVIDIA GeForce RTX 3070 L...    On | 00000000:01:00.0  On |                  N/A |
| N/A   27C    P8               10W /  N/A|    424MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |

| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|  No running processes found                                                           |

After trying a bunch of different things, I found the solution and posted it on StackOverflow.
Link:


Welcome to the TensorFlow Forum!

Please configure the system paths as shown below and let us know if that resolves the issue:

# Create the conda activation-hook directory
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
# Locate the pip-installed cuDNN and add its lib dir (plus the env's own lib dir)
# to LD_LIBRARY_PATH every time the environment is activated
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

Then deactivate and reactivate the environment (conda deactivate && conda activate tf) so the hook runs.
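For context on what this hook does: conda sources every `*.sh` file in `$CONDA_PREFIX/etc/conda/activate.d` when the environment is activated. Here is a minimal, conda-free simulation of that mechanism, using a throwaway prefix and a hypothetical `/opt/fake-cudnn` path in place of the real cuDNN location (the actual command derives it from the `nvidia.cudnn` package):

```shell
# Simulate a conda prefix with a temporary directory (no conda required).
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/etc/conda/activate.d"

# Write an env_vars.sh like the one the guide creates. CUDNN_PATH is
# hard-coded to a fake path here purely for illustration.
cat > "$PREFIX/etc/conda/activate.d/env_vars.sh" <<'EOF'
CUDNN_PATH=/opt/fake-cudnn
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib
EOF

# Simulate activation: conda sets CONDA_PREFIX, then sources the hook.
export CONDA_PREFIX="$PREFIX"
. "$PREFIX/etc/conda/activate.d/env_vars.sh"

# The library search path now ends with the cuDNN lib directory.
echo "$LD_LIBRARY_PATH"

rm -rf "$PREFIX"
```

In the real environment, after `conda deactivate && conda activate tf`, `echo $LD_LIBRARY_PATH` should likewise show the cuDNN `lib` directory appended, which is what lets TensorFlow dlopen the GPU libraries.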

Thank you!