On Ubuntu 20.04, TensorFlow 2.10 as installed via pip tries to dlopen libnvinfer from TensorRT. This differs from the Windows PyPI package, which has no TensorRT dependency, and, more importantly, it wants the outdated version 7, which is only available up to Ubuntu 18.04 and only compatible with CUDA up to 11.2. Either the TensorRT dependency should be removed (as it was before, and still is on Windows), or it should target the current version 8.
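The dlopen behaviour can be checked directly, without importing TensorFlow. A minimal sketch (the `probe_nvinfer` helper is hypothetical, not part of any library) that tries to load the same libnvinfer sonames TensorFlow looks for:

```python
import ctypes

def probe_nvinfer(versions=(7, 8)):
    """Try to dlopen libnvinfer.so.<v> for each major version and report
    which ones are loadable on this machine."""
    found = {}
    for v in versions:
        try:
            ctypes.CDLL(f"libnvinfer.so.{v}")
            found[v] = True
        except OSError:
            found[v] = False
    return found

print(probe_nvinfer())
```

On a machine hit by this issue, version 7 reports False even when TensorRT 8 is installed, which matches the `libnvinfer.so.7` warnings in the log below.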
Please refer to the latest update in the GitHub issue quoted below:
opened 03:58PM - 12 Sep 22 UTC · labels: type:bug, comp:gpu:tensorrt, TF 2.10
<details><summary>Click to expand!</summary>
### Issue Type
Bug
### Source
binary
### Tensorflow Version
tf 2.10
### Custom Code
No
### OS Platform and Distribution
Linux Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/Compiler version
_No response_
### CUDA/cuDNN version
11.7/8.5
### GPU model and memory
_No response_
### Current Behaviour?
```shell
On Ubuntu 20.04, TensorFlow 2.10 as installed via pip tries to dlopen libnvinfer from TensorRT. This differs from the Windows PyPI package, which has no TensorRT dependency, and, more importantly, it wants the outdated version 7, which is only available up to Ubuntu 18.04 and only compatible with CUDA up to 11.2. Either the TensorRT dependency should be removed (as it was before, and still is on Windows), or it should target the current version 8.
```
### Standalone code to reproduce the issue
```python
import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))
```
### Relevant log output
```shell
2022-09-12 17:53:57.425679: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-12 17:53:57.566393: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2022-09-12 17:53:57.596092: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-09-12 17:53:58.180614: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-09-12 17:53:58.180675: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-09-12 17:53:58.180680: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2022-09-12 17:53:58.968890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2022-09-12 17:53:59.000480: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-12 17:53:59.000632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:980] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
```
</details>
This issue is fixed in tf-nightly and will also be fixed in the upcoming TF 2.12.
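To check whether an installed wheel falls in the affected range, the version string can be compared directly. A sketch (the helper name and the exact range are assumptions based on this thread: 2.10 is known affected, 2.12 and tf-nightly are fixed):

```python
def trt7_lookup_expected(tf_version: str) -> bool:
    """Guess whether this Linux pip wheel tries to dlopen libnvinfer.so.7.

    Assumed range, per this thread: 2.10 (and 2.11) look for TensorRT 7,
    while tf-nightly and 2.12+ do not.
    """
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    return (2, 10) <= (major, minor) < (2, 12)

print(trt7_lookup_expected("2.10.0"))  # True: affected per this thread
print(trt7_lookup_expected("2.12.0"))  # False: fixed per this thread
```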
Thank you!