Looks like there are a lot of threads and comments, both here in the forum and on the GitHub bug report, from people who have been struggling to use tflite-model-maker since the update to Python 3.10 in Colab.
For people who use the tflite-model-maker package regularly, can we get some clarification on whether this package is being deprecated in favour of mediapipe-model-maker or whether both will be actively maintained?
If tflite-model-maker is being deprecated, can we have some clarification as to whether EfficientDet will be supported in mediapipe-model-maker? It currently looks like it only supports transfer learning with MobileNet-V2 and MobileNet-MultiHW-AVG.
Happy to transition to mediapipe-model-maker for transfer learning of EfficientDet models if that is the intended roadmap; just looking for clarification on where things are heading so we can prepare from this end. Also super happy to stick with tflite-model-maker.
Definitely appreciate people are busy working on projects/bug fixing etc but maybe a question for @khanhlvg or @Yuqi_Li or @chunduriv
As per my knowledge, both packages will be actively available.
@Lu_Wang - Could you please share some pointers on the above context?
Thanks for the reply @chunduriv. Sounds good that both are potentially being supported into the future, although I do know TensorFlow Addons has moved into maintenance mode ahead of discontinuation in May 2024, and tflite_model_maker relies on parts of it to function.
With the hope of transitioning over to mediapipe_model_maker for EfficientDet (object detection) model making in the future, I’ve been poking through the Git repo and will list some ramblings here for people who stumble across this post hoping to use mediapipe_model_maker to train their EfficientDet models.
Right now you can successfully train object detector models using MobileNet V2 with the MediaPipe Model Maker in Colab, and it works really nicely. So if MobileNet models are what you’re after, you’re good to go.
I can see from this git feature request/issue that teams are busy working on multiple fronts and it may be some time before EfficientDet is added to the supported models list - busy times, which is fair enough.
Looks like adding EfficientDet models to the list of supported models requires changes on two fronts in the object_detector side of things:
1. In model_maker/vision/object_detector/model.py, a model constructor for the EfficientNet backbone (the backbone of the EfficientDet models) needs to be added. Currently one is only constructed for RetinaNet (the detection architecture used with the supported MobileNet backbones).
2. In model_maker/vision/object_detector/model_spec.py, a new entry in the SupportedModels class needs to be made that allows the model maker to download the EfficientDet training checkpoint and set the input_image_shape for each EfficientDet variant to be used.
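To make that second point concrete, here’s a purely hypothetical sketch (not the actual MediaPipe source) of what an EfficientDet entry in model_spec.py might look like, by analogy with the existing MobileNet specs. The checkpoint name is an invented placeholder; the 320×320 input size matches EfficientDet-Lite0:

```python
import dataclasses
import functools


@dataclasses.dataclass
class ModelSpec:
    """Simplified stand-in for the spec objects in model_spec.py."""
    downloaded_files: str    # name of the pretrained checkpoint to fetch
    input_image_shape: list  # [height, width, channels] the model expects


# Hypothetical EfficientDet-Lite0 spec; the real checkpoint name and
# download location would have to come from the MediaPipe team.
efficientdet_lite0_spec = functools.partial(
    ModelSpec,
    downloaded_files="efficientdet_lite0_ckpt",  # assumed name
    input_image_shape=[320, 320, 3],             # EfficientDet-Lite0 input size
)
```

The SupportedModels enum would then presumably gain an EFFICIENTDET_LITE0 member pointing at this spec, mirroring how MOBILENET_V2 is wired up today.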
From looking around in the model_maker code we can see that EfficientNet is supported in the Image Classifier maker, so, with no knowledge of the roadmap at all, it seems likely that EfficientDet would eventually be included in the supported models for the Object Detector - it’s most likely just a matter of time.
All this is easier said in a post than done in reality, so I’m happy to patiently wait it out and simply register my interest and eagerness to have EfficientDet models added to mediapipe_model_maker in the future.
Just a quick post for anyone who is interested - I’m definitely just a user and have no insight into the goings-on at TensorFlow/Google.
Always keen to hear from anyone who has any real insights into this.
@wwfisher thanks for those great questions!
MediaPipe Model Maker evolved from TFLite Model Maker and is the next-generation on-device training tool we are working hard to build. Until MediaPipe Model Maker becomes fully mature, you can continue relying on TFLite Model Maker. We’ll make an announcement when we feel confident about the migration.
We are aware of the issue using TFLite Model Maker in Colab due to Python 3.10. A new release is coming soon to fix the issue. Please stay tuned.
Between EfficientNet and MobileNet: MobileNet is the architecture we’ll focus on more going forward. We have heard user requests asking for feature parity in MediaPipe for EfficientNet, and we are evaluating the quality of the two on various metrics. We’ll give clearer guidance in this thread on which one to use once we have the answer.
Thanks @Lu_Wang for the reply - nice to hear about the plans for a smooth migration as MediaPipe becomes more mature, and also about the new release of TFLite Model Maker with the Python 3.10 Colab fix.
In regard to focusing more on the MobileNet architecture going forward, will that be across both classification and detection models?
I’m not too sure where the classification users’ heads are at, but I’d definitely like to put in a vote for parity support of EfficientDet models in the detection world. Scalability from D0 through D7x provides a great level of flexibility across different uses and devices as we developers balance speed vs accuracy (even if that comes at the expense of more intense training resources).
If any of the metrics behind the decision-making on MobileNet vs EfficientNet can be made available, I’d definitely be keen to read them.