Profiling multi-node, multi-GPU training

Hi experts, I am trying to figure out how to profile a distributed training setup that uses `tf.distribute.MultiWorkerMirroredStrategy`. I am using `tf.profiler.experimental.Profile`, but I am not sure whether it handles distributed training: the profiler reports `no_of_hosts == 1`, and only one worker produces profiling logs. Any suggestions?
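To illustrate the situation: as far as I can tell, `tf.profiler.experimental.Profile` only captures the host it runs on, so each worker has to open its own profiling session, ideally writing to a per-worker log directory derived from `TF_CONFIG`. Below is a minimal sketch of that idea; the helper names (`per_worker_logdir`, `profiled_training`) and the base path `/tmp/profiler` are my own, not part of any TensorFlow API, and it assumes TF 2.x.

```python
import json
import os


def per_worker_logdir(base="/tmp/profiler"):
    # Derive a distinct log directory for this worker from the
    # TF_CONFIG environment variable that MultiWorkerMirroredStrategy
    # setups normally define. Falls back to worker_0 when unset.
    tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
    task = tf_config.get("task", {})
    task_type = task.get("type", "worker")
    task_index = task.get("index", 0)
    return os.path.join(base, f"{task_type}_{task_index}")


def profiled_training(train_steps_fn, base_logdir="/tmp/profiler"):
    # Run this on EVERY worker: the Profile context manager traces only
    # the local host, so each worker records its own trace, and each one
    # writes to its own subdirectory to avoid clobbering the others.
    import tensorflow as tf  # imported lazily; assumes TF 2.x installed

    with tf.profiler.experimental.Profile(per_worker_logdir(base_logdir)):
        train_steps_fn()
```

After a run, each worker's subdirectory (e.g. `worker_0`, `worker_1`) can be opened in TensorBoard's profile plugin separately, which at least explains why a single `Profile` call on one worker yields logs from only that host.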