Seeking Clarity on TensorFlow Model Garden's Activity and Future Directions

Hello TensorFlow Community,

I hope this message finds you well. I wanted to reach out to inquire about the current status and future plans for the TensorFlow Model Garden (GitHub - tensorflow/models: Models and examples built with TensorFlow). As an enthusiast in the field of machine learning, particularly interested in generative models, domain adaptation, and contrastive learning, I’ve noticed a significant trend in the adoption of PyTorch for implementing and maintaining these models.

Platforms like Hugging Face and Facebook AI Research have been prolific in their contributions to the field, providing comprehensive implementations and tools, resulting in numerous papers and research leveraging PyTorch and their community resources.

However, I’ve found PyTorch to be somewhat unpolished, often requiring extensive manual configuration, especially when dealing with hardware utilization (CPUs/GPUs), distributed learning, and various backpropagation algorithms. On the other hand, TensorFlow offers a more polished environment for development, yet I’ve observed a lack of parallel implementations of many cutting-edge models compared to what’s available in PyTorch.

In light of this, I’ve taken the initiative to port some of these models into TensorFlow. Upon discovering the TensorFlow Model Garden, I was hopeful to find a vibrant community eager to embrace newer models and contributions. However, the activity in the repository seems relatively subdued.

My query to the community is two-fold:

  1. Is the TensorFlow Model Garden still an active community? Are there ongoing efforts to update and maintain it?
  2. Would it be advisable to contribute new models, particularly those that are currently dominant in the PyTorch ecosystem, to the TensorFlow Model Garden?

I believe fostering a strong TensorFlow community around these cutting-edge models could greatly benefit researchers, practitioners, and enthusiasts alike. Any insights, guidance, or suggestions on this matter would be immensely appreciated.

Thank you for your time and consideration.

Best regards,
Abhas Kumar Sinha

I am not sure what the current status of TF Model Garden is, but IMHO this terrible repository should be archived. No offense.

TensorFlow and Keras are too far behind to compete with PyTorch. If you like to use Keras (or KerasCV or KerasNLP), then use version 3 with the PyTorch backend. But it is wiser to move to PyTorch.

I don’t get why it is wiser to move to PyT? I see it as a rather terrible framework: manual placement of tensors in CPU/GPU memory, manual distribution setup, and no feature parallel to JAX in comparison.

I do know they have a monopoly in research, thanks to the FAIR and Hugging Face archives: their ready-made models are easily accessible to researchers in many formats, which isn’t the case with TF. Unfortunately, TF Model Garden isn’t even a parallel to HF now.

If you have trouble with PyTorch’s low-level ops, you can still use PyTorch Lightning or fastai.

I don’t get why it is wiser to move to PyT?

It is not that PyTorch is greater than TensorFlow. But in the current open-source landscape, most of the valuable codebases are written in, and strongly support, PyTorch, e.g. Transformers, Diffusers, and many more. FYI, Google itself uses PyTorch for some of its research work. Check this.

TF Model Garden isn’t even a parallel to HF now.

That is the reason I said earlier that TF Model Garden should be archived. Check this topic.

Interesting. I wasn’t aware of that one. But if you notice, PyT uses the XLA backend (originally developed for TensorFlow) to interact with TPUs, which are Google’s hardware in the end.


  1. I’m not into this PyT vs TF thing, but I need to understand: why have researchers recently moved so overwhelmingly to PyT? What made them all switch? Is it the docs, community support, maintenance, funding, or something else?

  2. The repo you showed (Google’s Prompt-to-prompt): does it use PyT directly, or is it built upon repos and papers that used PyT? (I guess the latter, since diffusion models are already built with PyT in the end.)

  1. As for the Model Garden, it seems like junk: issues and PRs aren’t being responded to, and participation is slow. Is there any TF alternative for it at the moment?

For now, I hope Google doesn’t stop and archive TF and keeps it around for a while. I suppose it is still better than PyT in multiple aspects, but participation has dropped drastically over the past few years.

  1. Btw, what about the researchers who used TF in the past? What is their opinion now, after all this?

I was looking to port a few models into TF, but after all of this, even my mind has changed.

  1. Debugging is easier in PyTorch than TensorFlow. Third-party implementations are vastly more available in PyTorch, e.g. timm, transformers, and many more. That makes it a first-class choice among researchers, engineers, etc.
  2. Diffusers is built with PyTorch. When Google itself uses it, that makes a big difference. There are other examples I could share.
  3. Keras is always the first-class API for TensorFlow. The alternatives would be KerasCV and KerasNLP. (But I don’t see any effort from Google to make development faster. Hugging Face hires dedicated researchers and engineers to keep their repos up to the mark, and I don’t believe there is a budget issue; this is Google we are talking about. One thing that surprised me: a few months ago some of the core KerasCV members left the company, so KerasCV development is slower than ever. I wouldn’t mind seeing TF Model Garden archived and that effort redirected to KerasCV and KerasNLP.)
  4. don’t know, don’t care.

François created Keras, a great tool. If you like to use it, go for it. But note: in your workplace, among your colleagues, you have to pick models, libraries, etc. that are available, strongly supported, and work as expected. That said, the APIs of TF and PT are quite similar, so if you are familiar with one, you should be okay with the other. If you are open to contributing, check this open issue in KerasCV.

Contributions to Keras do make sense to me; primarily, it has three advantages:

  1. Easy portability to TF
  2. Easy portability to PyT
  3. Easy portability to JAX

I believe anything written using Keras can run on any of the three DL engines above. (Right?) That makes Keras a good choice to work on for now.
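That multi-backend claim can be sketched with a tiny script. This is just an illustration, assuming Keras 3 is installed; the model, shapes, and random data are arbitrary examples, not anything from the Model Garden:

```python
import os
# The backend must be chosen BEFORE the first `import keras`.
# Any of "tensorflow", "torch", or "jax" works; nothing else in the script changes.
os.environ["KERAS_BACKEND"] = "tensorflow"

import keras
import numpy as np

# A tiny model written only against the Keras API (no tf.*, torch.*, or jax.* calls),
# so the same file runs unchanged under all three backends.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

The only backend-specific piece is the environment variable; the rest of the script is portable.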

Is there any Keras alternative to TF Garden? If so, I’ll start porting my models to it right away.

Thank you

Keras 3 code should run with all backends (tensorflow, torch, and jax). See the official Developer guides.

TF Garden is mainly a model zoo covering different domains, i.e. CV, NLP, etc. So keras-cv and keras-nlp are the first-class alternatives here. Migrating tf.keras models to Keras 3 (into keras-cv or keras-nlp) would be a great contribution (a relevant ticket). Also take a look at this migration guide: Migrating Keras 2 code to multi-backend Keras 3.

I’m confused about the stuff that doesn’t fall under either NLP or CV, like LoRA (fine-tuning), RAG, etc. Where is that supposed to go? Does Keras have a dedicated repo for those too?

Or would one need to shoehorn them into either NLP or CV somehow and add them there, like reinforcement learning things?

Hi @Abhas_Kumar,

We would really appreciate any contribution to the official Model Garden repository; for custom model architecture design, we have a setup in place. You can always use that and contribute new models.
The boilerplate required for any new model (vision): starter. Please let us know of any other suggestions for improving the repository; we can work together to improve the ecosystem of the TensorFlow Model Garden.


Hi @innat,

We’re particularly interested in your thoughts on the following:

  • Potential areas for improvement within the repository. Perhaps you’ve encountered specific sections that could be better documented or features that could be enhanced.
  • Impact of migrating to Keras 3. We understand that Keras 3 introduces some code changes, especially regarding optimizers. We’d appreciate your perspective on how this migration might affect the overall performance and maintainability of the repository code, particularly considering the highly customizable architectures for both vision and NLP tasks.
  • Support for Yolo-V7. We’re excited to have added support for Yolo-V7, but we’re always open to suggestions for improvement or additional features that would be beneficial to the community.

If you have any suggestions, feedback, or recommendations, please don’t hesitate to share them with us. We’re eager to learn from your expertise and collaborate to make the TensorFlow Models repository even better.

Thank you for your time and consideration.

Things that don’t fit either KerasCV or KerasNLP and can be considered general components are placed in core Keras: for example, LoRA, and various metrics or loss methods that can be used in any domain (CV, NLP, etc.). Components that are multi-modal are added to the appropriate codebase based on their end goal.
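To make the "general component" point concrete: LoRA is just a low-rank additive update to a frozen weight matrix, so it applies equally to CV and NLP layers. Here is a minimal NumPy sketch of the idea; all shapes and the rank are arbitrary choices of mine (Keras 3 ships the real thing as `enable_lora(rank=...)` on layers such as `Dense` and `EinsumDense`, if I recall correctly):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 32, 4                 # arbitrary sizes; rank << min(d_in, d_out)

W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, rank)) * 0.01  # trainable down-projection
B = np.zeros((rank, d_out))                   # trainable up-projection, zero-init

def lora_forward(x):
    # y = xW + x(AB): only A and B are trained, i.e. rank*(d_in + d_out)
    # parameters instead of d_in*d_out.
    return x @ W + x @ (A @ B)

x = rng.standard_normal((8, d_in))
# Zero-initialized B means the adapted layer starts identical to the base layer.
assert np.allclose(lora_forward(x), x @ W)
```

Nothing in this math is CV- or NLP-specific, which is why it lives in core Keras rather than a domain repo.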



Potential areas for improvement within the repository

Sorry, I don’t see any hope for the TF Model Garden repository. IMHO, ML practitioners should not be encouraged to use it; otherwise this type of user feedback will continue. As I said, it should be archived, same as TensorFlow Addons.

Impact of migrating to Keras 3

I am not sure what your actual concern is here. TF Model Garden has many tf.keras components; I was suggesting migrating those components to the relevant domain-specific codebases, i.e. keras-cv and keras-nlp.

Support for Yolo-V7

What is your point? KerasCV offers Yolo-V8.

Thank you for the response encouraging me to start contributing to the official Model Garden repository.
I’m specifically interested in the following general points.

  1. What are the scope and plans for updating the repository on TF Hub? The latest release seems dated (v2.16, 16th Nov 23), the previous one was on 17th Oct 23, and so on. There has been a sudden halt in Model Garden releases, with none so far in 2024: Releases · tensorflow/models · GitHub. Is TensorFlow planning to keep TF Hub going?

  2. Any model ported to Keras 3 automatically runs on the TF backend. So if I port a model to Keras 3, is it okay to push the same code into TF Model Garden, with small modifications, following the contribution guidelines?

  3. What’s the 2024 roadmap for TensorFlow Hub, and what are the plans for getting models ported into the TensorFlow Model Garden?

Thank you.

Hi @Abhas_Kumar,

  1. The TF-Hub site itself has been deprecated and the models now live on Kaggle; they have basically moved there. There will be a release in the upcoming month.
  2. As of now the code runs using Keras 2.15 from the tf-keras repo. We are trying to migrate one model to check whether migration is feasible, because there are scenarios where some functions have been deprecated in Keras 3. So we have to find Keras 3 alternatives and watch for possible weight-loading problems.
  3. All the models in the TensorFlow Model Garden will be available as Kaggle Models. Any model not yet present on Kaggle will also become available in the coming days.

Thanks & Regards.

Hi @innat,

Migrating to KerasCV and KerasNLP is a good way forward, I can say that. I was just noting that YOLO is supported; I know that YOLO-v8 is present in Keras. We will try to improve the repository for users. Thanks for your view.

Thank you.

Point (2) seems a bit of a problem, as a lot of changes would be needed. Is there an explicit list of functions that are deprecated or changed in the latest Keras 3? That would be helpful to look into.

If TensorFlow has future support for Keras 3, that could make the solution easier, as Keras 3 models could then be transitioned into TF Models easily.

This is the documentation for migrating from Keras 2 to Keras 3, where you can find some of the deprecated functions.

From TF 2.16, tf.keras will point to Keras 3, and Keras 2 will be supported under the tf-keras repo.

Legacy optimizers are affected by Keras 3, so code that uses legacy optimizers or the API functions mentioned in the guide has to be changed to the new Keras 3 APIs, and all the benchmarks have to be re-checked as well.
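For reference, the optimizer change is largely a namespace move: the `tf.keras.optimizers.legacy` classes do not exist in Keras 3, and the replacements keep the same hyperparameters. A sketch of the swap, assuming Keras 3 is installed (SGD with momentum is just an example):

```python
# Keras 2 / Model Garden style (the `legacy` namespace is gone in Keras 3):
#   opt = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, momentum=0.9)
#
# Keras 3 replacement, same hyperparameters:
import keras

opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
```

The numerics of the new optimizers can differ slightly from the legacy ones, which is one reason re-running benchmarks after the swap is prudent.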

What specific changes have affected legacy optimizers??!!

I see them using the same syntax. Do benchmarks vary with Keras 3 optimizers? Or is there any change in the formula?

  1. If you check here, in the Model Garden we have a dependency on legacy optimizers, so first we have to move those to the official new optimizers.
  2. All the models are written using the tf.GradientTape method, and in that subclassing against the Keras 3 API, some functions are deprecated. I will provide a screenshot of that as well. So even after migrating, we have to check the accuracies of the previous versions against the new versions.
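For context on point 2, a Model-Garden-style custom loop looks roughly like the toy example below. The tf.GradientTape pattern itself still runs under Keras 3 on the TF backend; what tends to break is the deprecated tf.keras helpers called inside subclassed models. This is only a sketch, assuming TensorFlow is installed; the one-parameter regression task is my own illustration:

```python
import numpy as np
import tensorflow as tf

# One trainable scalar; the loop fits y = 3x, so w should converge toward 3.
w = tf.Variable(0.0)
x = tf.constant(np.linspace(0.0, 1.0, 8), dtype=tf.float32)
y = 3.0 * x

for _ in range(100):
    with tf.GradientTape() as tape:
        # Record the forward pass so the tape can differentiate the loss w.r.t. w.
        loss = tf.reduce_mean(tf.square(w * x - y))
    grad = tape.gradient(loss, w)
    w.assign_sub(0.5 * grad)  # plain SGD step; a Keras optimizer would go here
```

Verifying that such loops still reproduce the previous accuracies after the Keras 3 migration is exactly the re-benchmarking work described above.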