Is installing/using TensorFlow with a GPU possible using pipenv?

pipenv seems like a nice Python environment manager, and I was able to set up and use an environment … until I tried to use my GPU with TensorFlow. I then received errors that libraries could not be dlopened. The error message said to check that the libraries mentioned above were installed, but no libraries were mentioned. Error text from the command-line test is below.

My TensorFlow/GPU install works fine in a conda environment.

Is it possible to use TensorFlow and CUDA/GPU in a pipenv environment?


Error message

❯ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-10-25 12:17:25.803005: I tensorflow/core/util/] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-25 12:17:25.821724: E tensorflow/compiler/xla/stream_executor/cuda/] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-25 12:17:25.821743: E tensorflow/compiler/xla/stream_executor/cuda/] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-25 12:17:25.821760: E tensorflow/compiler/xla/stream_executor/cuda/] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-25 12:17:25.825514: I tensorflow/core/platform/] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-25 12:17:26.192767: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Could not find TensorRT
2023-10-25 12:17:26.572148: I tensorflow/compiler/xla/stream_executor/cuda/] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at
2023-10-25 12:17:26.583490: W tensorflow/core/common_runtime/gpu/] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

Nvidia setup (currently training in the aforementioned conda env.)

❯ nvidia-smi
Wed Oct 25 13:09:48 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A4500    On   | 00000000:01:00.0  On |                  Off |
| 40%   66C    P2   117W / 200W |  19641MiB / 20470MiB |     91%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

OS Setup

❯ neofetch --off
OS: Debian GNU/Linux 12 (bookworm) x86_64
Kernel: 6.1.0-13-amd64
Uptime: 8 days, 20 hours, 58 mins
Packages: 4291 (dpkg), 58 (flatpak)
Shell: zsh 5.9
Resolution: 2560x2880, 3840x2160, 3840x2160
DE: GNOME 43.6
WM: Mutter
WM Theme: Adwaita
Theme: Adwaita [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: alacritty
CPU: 13th Gen Intel i9-13900K (32) @ 5.500GHz
Memory: 99342MiB / 128511MiB

Hi @John

Welcome to the TensorFlow Forum!

The error “libraries could not be dlopened” usually indicates missing or incompatible CUDA or cuDNN libraries, which are essential for TensorFlow’s GPU support.

Please check that your installed TensorFlow and Python versions are compatible with the CUDA and cuDNN versions listed in the tested build configurations, and make sure the library path is set correctly so TensorFlow can locate these libraries. Thank you.
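As a rough sketch (assuming TensorFlow 2.14 or newer, which offers an `and-cuda` pip extra that pulls in matching CUDA and cuDNN wheels — nothing pipenv-specific is needed, since pipenv installs the same wheels pip would), installing and verifying inside a pipenv environment might look like:

```shell
# Install TensorFlow together with pip-packaged CUDA/cuDNN libraries
# (the `and-cuda` extra exists for TF >= 2.14; earlier versions require
# a system-wide CUDA toolkit matching the tested build configuration).
pipenv install "tensorflow[and-cuda]"

# Print the CUDA/cuDNN versions this TensorFlow build expects:
pipenv run python -c "import tensorflow as tf; print(tf.sysconfig.get_build_info())"

# Then re-run the GPU visibility check from inside the pipenv environment:
pipenv run python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

If the GPU still does not appear, comparing the versions reported by `tf.sysconfig.get_build_info()` against `nvidia-smi`'s driver/CUDA versions is a quick way to spot a mismatch.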