Ubuntu 22.08 GPU support problem in PyCharm

Hello fellow humans, human fellas.
When trying to use GPU support in PyCharm with this code:

import tensorflow as tf

print('TensorFlow version:', tf.__version__)

# List every device TensorFlow can see (CPU and, hopefully, GPU).
physical_devices = tf.config.list_physical_devices()
for dev in physical_devices:
    print(dev)

# CUDA/cuDNN versions this TensorFlow binary was built against.
sys_details = tf.sysconfig.get_build_info()
cuda_version = sys_details["cuda_version"]
print("CUDA version:", cuda_version)
cudnn_version = sys_details["cudnn_version"]
print("CUDNN version:", cudnn_version)

print(tf.config.list_physical_devices("GPU"))

I get:

/home/victor/miniconda3/envs/tf/bin/python /home/victor/PycharmProjects/Collect/Code/import tester.py 
2023-02-14 13:35:42.834973: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-14 13:35:43.820823: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2023-02-14 13:35:43.900520: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2023-02-14 13:35:43.900552: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
TensorFlow version: 2.11.0
2023-02-14 13:35:46.109811: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
CUDA version: 11.2
CUDNN version: 8
[]
2023-02-14 13:35:46.133522: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2023-02-14 13:35:46.133541: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

Process finished with exit code 0

I then went to the NVIDIA repository index (Index of /compute/cuda/repos/ubuntu2004/x86_64) to get the missing libcudnn.so.8 (as far as I can tell from the bug reports, the libnvinfer.so.7 warning is a known issue, and that library also does not exist on the site).
I downloaded the .deb file and ran it with the software installer, which told me the library is already installed. So I went back to the terminal, switched to root, and ran:

(base) root@victor-ThinkPad-P53:~# sudo dpkg -i /home/victor/Downloads/libcudnn8-dev_8.1.0.77-1+cuda11.2_amd64.deb

That gave me this:

(Reading database ... 224085 files and directories currently installed.)
Preparing to unpack .../libcudnn8-dev_8.1.0.77-1+cuda11.2_amd64.deb ...
Unpacking libcudnn8-dev (8.1.0.77-1+cuda11.2) over (8.1.0.77-1+cuda11.2) ...
dpkg: dependency problems prevent configuration of libcudnn8-dev:
 libcudnn8-dev depends on libcudnn8 (= 8.1.0.77-1+cuda11.2); however:
  Version of libcudnn8 on system is 8.1.1.33-1+cuda11.2.

dpkg: error processing package libcudnn8-dev (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 libcudnn8-dev

From what I can tell from this, the library exists in the correct location and should be readable from PyCharm.
Where it gets weird is that if I check for my GPU in the terminal, I can see it just fine:

tf.test.is_gpu_available('GPU');
WARNING:tensorflow:From <stdin>:1: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-02-14 13:08:14.691435: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-14 13:08:16.316234: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-02-14 13:08:17.759441: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /device:GPU:0 with 2628 MB memory:  -> device: 0, name: Quadro T1000, pci bus id: 0000:01:00.0, compute capability: 7.5
True

So TensorFlow can see the GPU from the terminal in the same environment (tf) that I am using inside PyCharm, but it can't see the GPU when I run the script from within PyCharm.
What do I do?
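
For reference, here is a small check that can be run both from a terminal (after conda activate tf) and from a PyCharm run configuration to compare what each process actually sees. It only uses the standard library; the library name is taken straight from the warning above, so treat the rest as a rough sketch:

import ctypes
import os
import sys

# Which interpreter is running (should be the tf environment in both cases).
print("interpreter:", sys.executable)

# The dynamic loader only honours LD_LIBRARY_PATH as it was set when the
# process started, so a PyCharm run configuration can see a different value
# than a terminal that went through "conda activate tf".
print("CONDA_PREFIX:", os.environ.get("CONDA_PREFIX"))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))

# Try to dlopen the library TensorFlow complains about.
try:
    ctypes.CDLL("libcudnn.so.8")
    print("libcudnn.so.8: loadable")
except OSError as err:
    print("libcudnn.so.8: NOT loadable:", err)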

What I've tried:
Reinstalled Ubuntu
Reinstalled all drivers
Reinstalled TensorFlow
Reinstalled PyCharm
Made a new environment
Checked compatibility between CUDA, cuDNN, and the NVIDIA GPU driver

Hi @Mr_Sweaty ,

Welcome to the TensorFlow Forum!

TensorFlow with GPU support is not configured correctly on your system, which is causing the error above.
Please refer to the official TF install link, verify the hardware/software requirements for the GPU setup, and then follow the step-by-step instructions mentioned in the link.
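
If you follow the conda-based steps in that guide, the CUDA/cuDNN libraries end up under $CONDA_PREFIX/lib and are found through LD_LIBRARY_PATH, which the environment's activation script exports; a run started from PyCharm may not go through that activation. Here is a minimal sanity check you could run from inside the environment (just a sketch under that assumption, nothing here is specific to your machine):

import os

# Assumption: the CUDA/cuDNN libraries were installed into the active
# conda environment, as in the conda-based install instructions.
prefix = os.environ.get("CONDA_PREFIX", "")
lib_dir = os.path.join(prefix, "lib")
libs = os.listdir(lib_dir) if os.path.isdir(lib_dir) else []

print("looking in:", lib_dir)
print("libcudnn present:", any(name.startswith("libcudnn.so") for name in libs))

# The dynamic loader only searches this directory if it was on
# LD_LIBRARY_PATH when the process was launched.
print("on LD_LIBRARY_PATH:",
      lib_dir in os.environ.get("LD_LIBRARY_PATH", "").split(":"))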

There is also a known issue with "libdevice not found", which can be fixed by following Step 6 of those instructions.
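
That step essentially points XLA (via the XLA_FLAGS environment variable) at a directory containing nvvm/libdevice/libdevice.10.bc. A quick way to check whether it took effect, assuming the conda-prefix layout used by the guide (treat the default path below as an assumption):

import os

# XLA reads the CUDA data directory from --xla_gpu_cuda_data_dir,
# usually passed through the XLA_FLAGS environment variable.
print("XLA_FLAGS:", os.environ.get("XLA_FLAGS"))

# Assumption: the data dir is pointed at $CONDA_PREFIX/lib, as in the guide.
cuda_data_dir = os.path.join(os.environ.get("CONDA_PREFIX", ""), "lib")
libdevice = os.path.join(cuda_data_dir, "nvvm", "libdevice", "libdevice.10.bc")
print(libdevice, "exists:", os.path.exists(libdevice))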

Please try again and let us know if the issue still persists. Thank you.