My setup consists of four Raspberry Pis (CPU only) and I use MultiWorkerMirroredStrategy. Training seems to work fine, and I use the TensorBoard Profiler to look at the stats, but it reports that only 1 host was used. Shouldn't it say 4?
I did set TF_CONFIG on each worker, and all the workers do indeed wait for each other before training starts.
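For reference, this is roughly how I build TF_CONFIG on each Pi (the IP addresses and port are placeholders for my network; each worker gets its own `index` from 0 to 3):

```python
import json
import os

# Placeholder addresses for my four Pis; replace with your actual IPs/ports.
CLUSTER = {
    "worker": [
        "192.168.1.101:12345",
        "192.168.1.102:12345",
        "192.168.1.103:12345",
        "192.168.1.104:12345",
    ]
}

def make_tf_config(worker_index: int) -> str:
    """Build the TF_CONFIG JSON string for one worker in the cluster."""
    return json.dumps({
        "cluster": CLUSTER,
        "task": {"type": "worker", "index": worker_index},
    })

# On worker 0 (use index 1, 2, 3 on the other Pis).
# TF_CONFIG must be set before the strategy is created:
os.environ["TF_CONFIG"] = make_tf_config(0)

# import tensorflow as tf
# strategy = tf.distribute.MultiWorkerMirroredStrategy()
```

(The strategy creation is commented out here because it blocks until all four workers are up.)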
Also, Device to Device Time is 0.0 ms, which shouldn't be the case. Is the Profiler unable to track multi-worker training, or is my training not actually using multiple workers?