TF Dual-GPU memory allocation

Hello all,

I am using TensorFlow 2.6 with two Nvidia A6000 GPUs. I have set TF_FORCE_GPU_ALLOW_GROWTH to true. When I run Python and create a tensor, for example:

import tensorflow as tf
a = tf.random.normal((10000,10000))

I see that a fraction of memory is also being allocated on the second GPU. Is this behavior intended, or is something wrong with my TF setup?

Thanks in advance for any assistance!

Hi @mkav, By default, TensorFlow maps nearly all of the GPU memory of every GPU visible to the process (subject to CUDA_VISIBLE_DEVICES). This is done to use the relatively precious GPU memory resources on the devices more efficiently by reducing memory fragmentation. To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method. Thank You.
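As a minimal sketch of the approach suggested above: the snippet below restricts TensorFlow to the first physical GPU, so subsequent ops allocate only there. Note that tf.config.set_visible_devices must be called before any GPU has been initialized (i.e. early in the program), otherwise it raises a RuntimeError; on a machine without GPUs the list is simply empty and nothing is restricted.

```python
import tensorflow as tf

# Physical GPUs the process can see (empty list on a CPU-only machine).
gpus = tf.config.list_physical_devices('GPU')

if gpus:
    # Restrict TensorFlow to the first GPU only. Must run before any
    # GPU is initialized, or a RuntimeError is raised.
    tf.config.set_visible_devices(gpus[0], 'GPU')
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "physical GPUs,", len(logical_gpus), "logical GPU visible")

# Tensors created after this point are placed only on the visible GPU.
a = tf.random.normal((1000, 1000))
```

With this in place, creating the tensor from the original question should allocate memory on GPU 0 only, leaving the second A6000 untouched.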