I am following the Step-by-step instructions to install TensorFlow with GPU support on a Linux machine.
But after running the commands:
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib
I am getting the following error when I try to run other commands such as nano or htop:
nano: error while loading shared libraries: libtinfow.so.6: cannot open shared object file: No such file or directory
Does anyone know how to get around this problem?
You don’t say which step-by-step instructions you are following. I suggest you provide a link so that others can see where you are in the process.
Having recently installed TensorFlow with GPU support using conda and pip, I ran into several problems that took a while to overcome, but they mostly came down to my NVIDIA installation looking on system paths rather than on conda paths.
One reason nano or htop would have problems is if you installed them inside your conda environment; they would then look for shared libraries in the conda environment rather than on your system.
You can use the “which <software_name>” command inside your conda environment to find out where a package is installed, e.g. “which nano”.
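For example (a quick check, with guards added in case a tool is not installed at all):

```shell
# Run inside the activated conda environment.
# If the printed path starts with $CONDA_PREFIX, the tool was
# installed via conda rather than from the system repositories.
which nano 2>/dev/null || echo "nano not on PATH"
which htop 2>/dev/null || echo "htop not on PATH"

# 'type -a' lists every match on PATH, which shows whether a conda
# copy is shadowing a system copy:
type -a nano 2>/dev/null || true
```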
If it turns out you installed this software under conda, then remove the packages under conda, deactivate conda, and use apt install to reinstall the affected packages on your system.
Thank you for your answer. Here is the link for the Step-by-step that I am following: Install TensorFlow with pip
I have checked the path for nano and htop and they are not under the conda environment.
I followed a slightly different guide, but I can’t see any reason you should have these problems, or why this command would affect tools like nano and htop.
Do these problems with nano and htop only happen under conda, or also when conda is deactivated?
Are you sure you didn’t make any changes to your system other than this command?
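One quick way to check whether the exported LD_LIBRARY_PATH is the culprit (a diagnostic sketch, not part of the install guide; `nano` shown, but the same applies to `htop`):

```shell
# Show what the export added to LD_LIBRARY_PATH:
echo "${LD_LIBRARY_PATH:-(unset)}"

# Prefixing a command with VAR= clears the variable for that one
# command only; if nano starts cleanly like this, a library
# directory on LD_LIBRARY_PATH is shadowing the system libtinfo:
LD_LIBRARY_PATH= nano --version 2>/dev/null || echo "nano not on PATH"

# ldd shows which shared objects a binary will actually load:
ldd "$(command -v nano)" 2>/dev/null | grep tinfo || true
```

If nano works with the variable cleared but fails with it set, the conda lib directory (added by the export) is the source of the libtinfow.so.6 error.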