I am interested in scaling an existing model for multi-GPU support with a custom data loader built on tensorflow.keras.utils.Sequence. Can anybody share a few thoughts?
The custom data loader is built on tensorflow.keras.utils.Sequence rather than tf.data.Dataset because of the nature of the dataset.
The following code is a minimal example.
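This is a stripped-down sketch of the setup (the class name, toy model, and random data here are placeholders; the real loader does more work per batch, which is why tf.data.Dataset is awkward):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import Sequence


class CustomLoader(Sequence):
    """Minimal Sequence-based loader that yields (x, y) batches by index."""

    def __init__(self, x, y, batch_size=32):
        self.x, self.y = x, y
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.x[lo:hi], self.y[lo:hi]


if __name__ == "__main__":
    x = np.random.rand(1000, 16).astype("float32")
    y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")
    loader = CustomLoader(x, y, batch_size=64)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(16,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # multiprocessing with the Sequence on a single node, multiple CPU workers
    model.fit(loader, epochs=1, workers=4, use_multiprocessing=True)
```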
The example above uses multiprocessing with a custom data loader on a single node with multiple CPU workers. Is there a way to scale it to multi-GPU training with tf.distribute.MirroredStrategy while keeping a custom data loader like the one in the example?
I dug around a bit, but most of the examples in the official documentation use tf.data.Dataset for multi-GPU training, which makes it a little complicated to adapt.
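For reference, the direction I was exploring is wrapping the Sequence as a tf.data.Dataset via from_generator, so it can be fed to a model built under MirroredStrategy like in the docs examples. The helper below is just my sketch (its name and the signature handling are my own, assuming the Sequence yields (x, y) numpy batches), and I am not sure it is the right approach:

```python
import tensorflow as tf


def sequence_to_dataset(seq):
    """Wrap a keras Sequence as a tf.data.Dataset (assumes (x, y) numpy batches)."""
    x0, y0 = seq[0]
    # batch dimension left as None so a ragged last batch is accepted
    output_signature = (
        tf.TensorSpec(shape=(None,) + x0.shape[1:], dtype=x0.dtype),
        tf.TensorSpec(shape=(None,) + y0.shape[1:], dtype=y0.dtype),
    )

    def gen():
        for i in range(len(seq)):
            yield seq[i]

    return tf.data.Dataset.from_generator(gen, output_signature=output_signature)


if __name__ == "__main__":
    # hypothetical usage: build and compile under the strategy scope,
    # then fit on the wrapped dataset
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")
    # model.fit(sequence_to_dataset(my_loader), epochs=1)
```

My concern is that this loses the workers/use_multiprocessing behaviour that the Sequence path gives, so I would like to know whether it is even necessary.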