Distributed learning models

It would be really nice if we could launch an experiment/initiative with the Model Garden and Federated teams and our community on training our first Model Garden model with our federated tools.

We have seen some interesting recent experiments with other frameworks.

https://arxiv.org/abs/2106.10207

Hello, TensorFlow already does this, and you can use Kubeflow, the ML layer on top of Kubernetes…

I think we are talking about something really different.
Take a look at the links mentioned above.

More generally:

https://arxiv.org/abs/1912.04977

I am also looking forward to federated learning with mobile devices and an orchestration server. From my recent conversation with an ML compiler engineer, the TFLite models that run on mobile devices are compiled, and it would be difficult to decompile a TFLite model to get the original TF code back. Thus, it would be really difficult to retrain, or use transfer learning to modify, a TFLite model directly. Currently, I am using Java/Kotlin to build an additional layer on top of the TFLite model, so that this additional layer can be retrained on the mobile device. But I still have an issue aggregating the weights coming back from the mobile devices, since the aggregation doesn't really converge to an equilibrium. A rough sketch of the kind of server-side aggregation I have in mind is below.
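To make the aggregation step concrete, here is a minimal Kotlin sketch of federated averaging (FedAvg) on the orchestration server, assuming each device uploads the flattened weights of the retrainable layer together with its local example count. The names `ClientUpdate` and `federatedAverage` are hypothetical, not part of any TFLite API.

```kotlin
// Hypothetical payload a device sends after local retraining:
// the flattened weights of the additional layer and the number of
// local examples used to train them.
class ClientUpdate(
    val weights: FloatArray,
    val numExamples: Int
)

// Weighted average of client weights (FedAvg-style aggregation).
fun federatedAverage(updates: List<ClientUpdate>): FloatArray {
    require(updates.isNotEmpty()) { "Need at least one client update" }
    val dim = updates.first().weights.size
    require(updates.all { it.weights.size == dim }) { "All updates must have the same shape" }

    val totalExamples = updates.sumOf { it.numExamples }.toFloat()
    val averaged = FloatArray(dim)
    for (update in updates) {
        // Weight each client's contribution by its local data size.
        val clientWeight = update.numExamples / totalExamples
        for (i in 0 until dim) {
            averaged[i] += clientWeight * update.weights[i]
        }
    }
    return averaged
}

fun main() {
    // Two hypothetical devices reporting weights for a 3-parameter layer.
    val round = listOf(
        ClientUpdate(floatArrayOf(0.10f, 0.20f, 0.30f), numExamples = 200),
        ClientUpdate(floatArrayOf(0.30f, 0.10f, 0.50f), numExamples = 100)
    )
    // New global weights to push back to the devices for the next round.
    val globalWeights = federatedAverage(round)
    println(globalWeights.joinToString())
}
```

Weighting by local sample count is the standard FedAvg choice; if the devices hold very different amounts of data, a plain unweighted average can keep oscillating, which may be one reason the aggregation doesn't settle.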

Check:

Take a look at this NeurIPS 2021 experiment:

@Jason What do you think about this crowdtraining experiment from a TF.js point of view?

Also, even though it's a bit different, we had something for TF.js at: