Which models would you like to see on TensorFlow Hub?

I would like to go back to Image Classification. The following two would be my top priority:

  • RegNet
  • Vision Transformers (ViT) in TensorFlow
3 Likes

Those are good suggestions!
Thanks for contributing!

1 Like

Those are nice too! I want to play with ViT too!

1 Like

It could be nice to have Google’s Pay Attention to MLPs.

2 Likes

Thanks @Bhack , that’s a good idea!

1 Like

As a side note, and more in general, I think we could find a way for TFHub and the Model Garden SIG to interact better with Papers with Code, now that it is integrated with arXiv.

2 Likes

I would like to see:

  • even more sped-up variants of existing (mostly mobile/embedded-focused) models: quantized variants down to 8-bit integers
  • everything having support for 16-bit floats (float16) / AMP
  • YOLO
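The 8-bit request above boils down to affine quantization: map floats to int8 with a per-tensor scale and zero point. A minimal NumPy sketch of that scheme (the same basic idea post-training quantization tools such as the TFLite converter apply; function names here are illustrative, not any library's API):

```python
import numpy as np

def quantize_int8(w):
    """Affine-quantize a float tensor to int8 with a per-tensor
    scale and zero point (illustrative sketch, not a library API)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0          # avoid 0 for constant tensors
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map the int8 values back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize_int8(q, s, z)
# reconstruction error stays within about one quantization step (the scale)
```

The payoff is the usual 4x size reduction versus float32, at the cost of that bounded rounding error per weight.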
3 Likes

That’s a good idea @Bhack , for Model Garden I can take a look and see if we can improve some processes!

1 Like

Thanks for the feedback @ntakouris

For the quantized versions, that's something we can always tell the publishers that developers want!

YOLO specifically was mentioned a few days ago in another thread, and TFHub is open for non-Googlers to publish their models. So if the YOLO maintainers want to publish their models, that would be awesome!

1 Like

@lgusm It would be great.

Take a look at some recent stats:

2 Likes

I hope that in addition to the trained model, we can also rebuild the model architecture as implemented with TensorFlow 2.x and use the corresponding dataset to go through the training process ourselves.

1 Like

Hi Nan Zheng,

For the full code, you can usually find it on TensorFlow Model Garden.

The corresponding dataset might be much harder, since each one has a specific license, and hosting them is more of a legal issue than a technical one. On TFHub and Model Garden, the dataset a model was trained on is mentioned in the documentation but not hosted.

2 Likes

Thanks for sharing @Bhack

2 Likes

Probably we can improve the collaboration and integration with TFDS, when possible, to have an improved ecosystem experience.

2 Likes

Yes, that would be a good thing to do.
This kind of feedback really helps the team prioritize next steps, so please keep it coming!

3 Likes

Thanks for your reply @lgusm ,

Unfortunately, I can't find the source code for all models in TensorFlow Hub. For example, I'm interested in the "handskeleton" model; all I know is that its structure is SSD, but I can't find the TensorFlow code in TensorFlow Model Garden. Will someone post it there in the future?

1 Like

Yes, it would be really nice to have a model card for every new model in TFHub.

I think that for the HandSkeleton specific case it is:

Model card:

2 Likes

Yes @Sayak_Paul, for image classification, with inputs using 224 width, 224 height, RGB (i.e. [1, 224, 224, 3]), and quantized models as well.
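As a minimal sketch of what that [1, 224, 224, 3] contract means in practice, here is NumPy-only preprocessing that gets an arbitrary RGB image into that float32 batch shape (nearest-neighbour resize for self-containedness; a real pipeline would typically use tf.image.resize, and the function name here is illustrative):

```python
import numpy as np

def preprocess(image, target_hw=(224, 224)):
    """Resize an RGB uint8 image (H, W, 3) to the [1, 224, 224, 3]
    float32 batch many TFHub image classifiers expect."""
    h, w, _ = image.shape
    rows = np.arange(target_hw[0]) * h // target_hw[0]   # nearest-neighbour
    cols = np.arange(target_hw[1]) * w // target_hw[1]   # source indices
    resized = image[rows][:, cols]                       # (224, 224, 3)
    scaled = resized.astype(np.float32) / 255.0          # [0, 255] -> [0, 1]
    return scaled[np.newaxis, ...]                       # add batch dim

batch = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
print(batch.shape)  # (1, 224, 224, 3)
```

Note that the exact expected value range ([0, 1] vs [-1, 1]) varies per model, so always check the model's documentation page on TFHub.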

I was looking at TF-Hub yesterday after a year's break, and there are now so many models that it is very confusing which ones are good image starters and which ones do what. I know that when you click on a model it gives you a lot of information, but has anyone done a review or summary page that says: start here for image classification, start here for sound, etc.?

My interest area is TensorFlow.js layers models, and things seem to have changed a bit. I will make a new thread about it called can-we-talk-about-tf-hub-downloading-to-tensorflowjs

There is a bit of a trend of making specific npm packages from TF-Hub models for transfer learning using TensorFlow.js, but that doesn't help people to make their own models.

2 Likes

Thanks for the feedback Jeremy!
I understand it perfectly, and that really helps move the product forward!!

TFHub has a lot of models these days. For an easy start, as of now, I'd recommend you take a look at our tutorials: www.tensorflow.org/hub

For images specifically, there's one for image classification and another for transfer learning that have a lot of insights.

It doesn’t address your point directly but can help you for now.

1 Like

Related to our thread, Papers with Code is expanding:

1 Like