Single-machine multi-GPU training

In TensorFlow 2.x, when doing single-machine multi-GPU training with the fit function, does fit automatically slice the dataset into multiple parts and distribute them across the different GPUs? Or is it still necessary to call strategy.experimental_distribute_dataset to split the sample data?

Hi @2711099209, as far as I know, strategy.experimental_distribute_dataset is needed when you implement a distribution strategy with a custom training loop. When you call model.fit inside a strategy scope (for example, tf.distribute.MirroredStrategy), Keras distributes the input dataset across the replicas automatically, so you do not need to split it yourself.
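
Here is a minimal sketch of that fit path. The model architecture, the random data, and the batch size of 64 are placeholders I made up for illustration; the point is just that the dataset passed to fit uses the global batch size, and fit splits each batch across the available GPUs on its own:

```python
import tensorflow as tf

# Mirror variables across all local GPUs.
strategy = tf.distribute.MirroredStrategy()

# Model and optimizer must be created inside the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# A plain tf.data.Dataset with dummy data. The batch size here is the
# *global* batch size; fit() shards each batch across the replicas.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((1024, 10)), tf.random.normal((1024, 1)))
).batch(64)

# No explicit call to experimental_distribute_dataset is needed here.
model.fit(dataset, epochs=2)
```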

strategy.experimental_distribute_dataset also lets you customize how the data is split across replicas when you write the training loop yourself. Thank you.
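
For contrast, here is a rough sketch of the custom-training-loop case where you do call experimental_distribute_dataset explicitly. Again the model, dummy data, and global batch size of 64 are assumptions for illustration, not part of the original question:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    optimizer = tf.keras.optimizers.SGD()
    # Per-example losses, so we can average over the global batch ourselves.
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((1024, 10)), tf.random.normal((1024, 1)))
).batch(64)

# Explicitly shard the dataset across replicas for the custom loop.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(inputs):
    x, y = inputs
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        # Scale the per-example loss by the global batch size so that
        # gradients sum correctly across replicas.
        loss = tf.nn.compute_average_loss(
            loss_fn(y, pred), global_batch_size=64)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for batch in dist_dataset:
    per_replica_loss = strategy.run(train_step, args=(batch,))
    # Combine the per-replica losses into a single scalar.
    loss = strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)
```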