How can I achieve distributed deep learning for computer vision?

I am a Ph.D. student in Economics, currently working on a project on deep learning-based object detection from satellite imagery.
I am struggling to figure out how to apply distributed deep learning to this task. I have access to PySpark 2.4.0 and TensorFlow 2.4.1 at my institute, and I was wondering whether there are useful resources or tutorials that could help me apply these tools to my work.
Specifically, I'm interested in the common approaches used in distributed deep learning frameworks for image processing and computer vision tasks, especially when working with TensorFlow or Keras.
In addition, I would like to know whether PySpark is a good or common way to load image data from HDFS for building object detection models, and whether the distributed or parallelized nature of the data is preserved when I convert a PySpark DataFrame into a format suitable for TensorFlow.
Also, can I achieve these goals without using separate libraries such as TensorFlowOnSpark or Horovod?


Hi @T.H_Yoon, in TensorFlow you can use the tf.distribute API to run distributed training and train a model on multiple devices. Thank you.
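
For example, here is a minimal sketch of single-machine, multi-GPU training with tf.distribute.MirroredStrategy, which works in TensorFlow 2.4. The model, dataset, and shapes are placeholders (a toy classifier, not an actual object detection pipeline), just to show where the strategy scope goes:

```python
# Minimal sketch: data-parallel training on one machine with MirroredStrategy.
# The model and dataset below are placeholders, not a real detection pipeline.
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs on this machine.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# The model and optimizer must be created inside the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Placeholder data; replace with a tf.data pipeline over your satellite images.
images = tf.random.uniform((256, 128, 128, 3))
labels = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(32)

# model.fit splits each batch across the replicas automatically.
model.fit(dataset, epochs=2)
```

The same pattern extends to several machines with tf.distribute.MultiWorkerMirroredStrategy, so basic data-parallel training does not strictly require TensorFlowOnSpark or Horovod.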