Use GPU to accelerate data preprocessing in Google Colab (Python)

I am trying to train my model. The input data consists of similar patches, and the function that generates these similar patches (8x8x10) takes a very long time, even though Colab is connected to a GPU (which is not being used).

Patches_label = img_to_Patches(images_label, sP, nSP, offset)
# This line takes a very long time
Patches_in = SearchSimilarPatches(images_in, sP, nSP, sW, offset)

history = model.fit(Patches_in, Patches_label, steps_per_epoch=2000, epochs=400, verbose=1, initial_epoch=initial_epoch, callbacks=[checkpointer,csv_logger,lr_scheduler])

Colab notified me that I am connected to a GPU, but it is not in use.

@samo_timy,

If you would like a particular operation to run on a device of your choice instead of the one selected automatically, you can use tf.device to create a device context; all operations within that context will run on the designated device.

Please refer to the example below, which uses the tf.device context manager:

import time
import tensorflow as tf

def time_matmul(x):
  # Time 10 matrix multiplications on whatever device the tensor x lives on.
  start = time.time()
  for _ in range(10):
    tf.linalg.matmul(x, x)

  result = time.time() - start

  print("10 loops: {:0.2f}ms".format(1000 * result))

# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
  x = tf.random.uniform([1000, 1000])
  assert x.device.endswith("CPU:0")
  time_matmul(x)

# Force execution on GPU #0 if available
if tf.config.list_physical_devices("GPU"):
  print("On GPU:")
  with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
    x = tf.random.uniform([1000, 1000])
    assert x.device.endswith("GPU:0")
    time_matmul(x)

Output:

On CPU:
10 loops: 45.80ms
On GPU:
10 loops: 310.99ms
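
Applied to your case, a minimal sketch might look like the following. This assumes img_to_Patches and SearchSimilarPatches are implemented with TensorFlow ops; if they are plain Python/NumPy code, placing them in a GPU device context will not make them faster, and they would first need to be rewritten using TensorFlow (or otherwise vectorized) operations.

import tensorflow as tf

# Sketch only: run the patch-generation step inside a GPU device context,
# assuming img_to_Patches and SearchSimilarPatches are built from TensorFlow ops.
if tf.config.list_physical_devices("GPU"):
  with tf.device("GPU:0"):
    Patches_label = img_to_Patches(images_label, sP, nSP, offset)
    Patches_in = SearchSimilarPatches(images_in, sP, nSP, sW, offset)
else:
  # Fall back to the default device if no GPU is available.
  Patches_label = img_to_Patches(images_label, sP, nSP, offset)
  Patches_in = SearchSimilarPatches(images_in, sP, nSP, sW, offset)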

Thank you!

Thanks so much for your reply!