Help wanted: best way to match TF, Python, GCC and CUDA versions for installation

Hello Community

1st) Please tell me if this is completely the wrong place for my request/question.

2nd) My question is:

What is the easiest/most reliable way to set up a TF/CUDA environment on Debian Buster?

Let me describe my difficulties first:

Setting both up successfully on a (Debian) Linux system involves some very particular version dependencies, and my investigations so far have not always led to a working setup:

  • TF is built against a specific GCC version; CUDA only works with a specific GCC version, which can conflict with the one TF expects; and the GCC installed by default on the Linux system very often differs from both.

→ How do I find out which CUDA release (and its required GCC) works with which version of TF, and how does that relate to the GCC version on the Linux system? And what are the traps/pitfalls I might stumble into?

  • There are nearly the same kinds of very particular dependencies on the Python/pip versions that are (or need to be) installed.

→ So, the same question applies to Python.

  • There are also special prerequisites for how to run the CUDA ‘.run’ installer.

Once this is done wrong, it is very hard to find all the traces a previous setup has left behind, clean the system up completely, and then get a second run of the CUDA ‘.run’ installer to succeed.

Since I am not a developer, it is very hard for me to set up CUDA together with TF successfully on Debian Buster so that I have it at hand for teaching purposes.

So which setup approach, including the correct dependencies, would you suggest to be successful in this matter?

Thank you very much in advance; any helpful response will be highly appreciated.

Kind regards from Switzerland,
Roger

Personally, I prefer to directly use, or build derived images from, the official Docker Hub repo:

https://hub.docker.com/r/tensorflow/tensorflow/
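
For example, a minimal sketch of using the GPU image directly (this assumes Docker and the NVIDIA Container Toolkit are installed on the host; the host itself only needs the NVIDIA driver, not a CUDA installation):

```
# Pull the official TensorFlow image with GPU support
docker pull tensorflow/tensorflow:latest-gpu

# Run it and check that TensorFlow can see the GPU
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

The advantage is that the CUDA/cuDNN/GCC/Python matching you describe has already been done inside the image, so none of it has to be solved on the host.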

Thank you for your response. Since I am not familiar with building derived images:
Could you please give me some information on how to do this?
And in the images behind the link you gave, are CUDA, Python, TensorFlow and everything else necessary integrated into one single image?
Or do I have to run each of them in a separate container?

Regards
Roger

You can create derived images with the FROM directive:
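
A minimal Dockerfile sketch (the `matplotlib` package, the `notebooks/` directory and the `my-tf-gpu` image name below are just hypothetical placeholders for your own material):

```
# Build on top of the official TensorFlow GPU image
FROM tensorflow/tensorflow:latest-gpu

# Add extra Python packages on top (matplotlib is just an example)
RUN pip install --no-cache-dir matplotlib

# Copy your own teaching material into the image (hypothetical path)
COPY notebooks/ /notebooks
WORKDIR /notebooks
```

Then build and run it with, e.g., `docker build -t my-tf-gpu .` and `docker run --gpus all -it --rm my-tf-gpu bash`.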

> And in the images behind the link you gave, are CUDA, Python, TensorFlow and everything else necessary integrated into one single image?

Yes, for GPU support use the GPU images:
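
For example (these tags are published on Docker Hub; `latest-gpu-jupyter` additionally bundles a Jupyter server, which may be handy for teaching):

```
# TensorFlow with CUDA and cuDNN preinstalled
docker pull tensorflow/tensorflow:latest-gpu

# The same, plus a Jupyter notebook server
docker pull tensorflow/tensorflow:latest-gpu-jupyter
```

CUDA, cuDNN, Python and TensorFlow all live inside that single image, so you do not need separate containers for each component; the host only needs the NVIDIA driver and the container runtime.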
