[To TensorFlow Team] Feedback on TensorFlow Object Detection API - Kaggle

TensorFlow - Help Protect the Great Barrier Reef

The recent Kaggle competition has finished, and most of the top solutions are in PyTorch.

Below is the relevant post from a TF user. I'm bringing it here so it reaches the right people (the TF teams) as feedback from end users.

In addition to this event, a few months ago there was an instance-segmentation competition on Kaggle, and sadly not a single solution or discussion used TF (AFAIK).

(PS: Some effective steps need to be taken, IMHO. I'm optimistic about Keras-CV, but I also understand it won't be as easy as it looks.)

4 Likes

Thank you for sharing this. It is very interesting, especially as it comes from a very popular (and Google-owned) platform like Kaggle.

IMHO it also seems partially connected to our thread at:
https://tensorflow-prod.ospodiscourse.com/t/keras-cv-keras-nlp-keras-applications-models-garden/7276

And to the trends mentioned in:
https://tensorflow-prod.ospodiscourse.com/t/which-models-would-you-like-to-see-on-tensorflow-hub/111/14

/cc @thea @Joana

2 Likes

See also my comment at:

1 Like

Additional Query


At the end of the TensorFlow - Help Protect the Great Barrier Reef competition, Kaggle staff compiled a concise summary of the solutions; find it HERE.

The common framework is PyTorch and the common model is YOLO-V5, which doesn't have a published paper (AFAIK). In tensorflow/models, so far only YOLO-V4 is available, with NO SUPPORT:

DISCLAIMER: this YOLO implementation is still under development. No support will be provided during the development phase.

As the competition ended with YOLO-V5 on the PyTorch framework as the winning solution, I'm wondering how it's going to be used in Google research for the COTS project with CSIRO.

To scale up video-based surveying systems, Australia’s national science agency, CSIRO, has teamed up with Google to develop innovative machine learning technology that can analyze large image datasets accurately, efficiently, and in near real-time.

1 Like

We already had similar feedback:

Let me share some ideas:

  • More third-party paper reference implementations in TF. We need to attract third-party paper authors.

  • Find a way to expose in TF the “non third-party” research that is currently done in JAX. I like framework diversity/competition, but it is one more barrier to making these works available in TF.

  • A clear collection of reusable model components, as a library, to incentivize and speed up community contributions without reinventing the wheel across multiple repositories.

  • Scale community contributions by promoting long-term/stable contributors to code ownership and sub-component reviews.

  • Incentivize TF Datasets contributions from dataset paper authors and through Kaggle competitions, so we don't need to write tedious data feeding/processing scripts every time.

  • More fine-tunable TF Hub models.

  • Extra: GitHub Actions jobs on GKE (or any other Google Cloud resource) to run training jobs on community model contributions approved by maintainers.

2 Likes

Very thorough.

1 Like

There are some interesting (but known) points in this report.

But I found that some important stats are missing: since both are OSS projects/ecosystems, how much are external contributors (not Meta/Google) contributing to these repositories?

I think that in the long run, diversity and inclusivity in contributions could really help the ecosystem's sustainability, vibrancy, and health, and could also help minimize bias in deciding where to invest and how to schedule the always “not infinite” resources.

Building this is much harder than just releasing a set of libraries.

2 Likes

Thanks @innat for the feedback!
This is very important and I agree that it can be improved!

I've watched a promotional demonstration video from Google about the Great Barrier Reef project; link below.

In that video, Megha Malpani, a product manager at Google AI/ML, talks about the Kaggle competition for this project, and also states that the TensorFlow 2 Model Garden library was the foundation of their codebase (video: 2:35)!

Now, this shocked me. It was the YOLO-V5 model, written in PyTorch, that produced the high-performing detection results. At the beginning of the competition, a starter with TensorFlow Model Garden was provided, but not only did it perform poorly, people simply dismissed it.

The promo video also states (video: 2:18) that they used Kaggle competition results to glean insights into what did and didn't work for this particular task. The fun fact is that the competition results are all YOLO-V5, which is currently dominating several object-detection Kaggle competitions.

I am wondering how the TensorFlow team will rework their Model Garden codebase to solve this task for CSIRO. As far as I know, the YOLO-V5 model won't be included in Model Garden anytime soon. Or will it?


Update

An example is on the way.