MobileNet V2 pretrained weights with lowered depth multipliers

I’m training SSD MobileNet V2 on custom data.

As per this blog post

I want to use a lower depth multiplier to decrease the model size.

When I change the depth multiplier in the pipeline config, the pretrained weights are no longer compatible.
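For context, in the TF2 Object Detection API the multiplier lives in the `feature_extractor` block of `pipeline.config`. A sketch (the field names come from the API's pipeline proto; the values here are illustrative):

```
model {
  ssd {
    feature_extractor {
      type: "ssd_mobilenet_v2_keras"
      depth_multiplier: 0.5   # lowered from the default 1.0
      min_depth: 16
    }
  }
}
```

Lowering `depth_multiplier` changes the channel counts of every backbone layer, which is why checkpoints trained at 1.0 stop matching.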

Are there any pretrained weights available for TF2 with different depth multipliers and input sizes? Or is it possible to load the checkpoints provided here

and export them in a format that can be used for training?
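On the export part of the question: the TF2 Object Detection API ships an exporter script (`exporter_main_v2.py` in the TensorFlow Models repo), so one option is roughly the following sketch, with placeholder paths you would fill in. Note this exports a SavedModel; whether checkpoints from other zoos restore cleanly under a changed depth multiplier is a separate question.

```shell
# Run from models/research/ in the TensorFlow Models repo.
python object_detection/exporter_main_v2.py \
  --input_type=image_tensor \
  --pipeline_config_path=path/to/pipeline.config \
  --trained_checkpoint_dir=path/to/checkpoint/ \
  --output_directory=path/to/exported_model/
```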

Hi @Patrick_Conway

How about SSD MobileNet V2 for object detection: TensorFlow Hub

trained on the COCO 2017 dataset. For example:


Alternatively, here are the MobileNet V2s for TF2 (various tasks), some of which have different depth multipliers:
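As a quick way to see how much the multiplier shrinks the backbone, `tf.keras.applications` exposes it directly as the `alpha` argument (this builds only the classification backbone, not the SSD detector; `weights=None` keeps it offline, though ImageNet weights are published for a fixed set of alphas):

```python
import tensorflow as tf

# In tf.keras.applications, the depth multiplier is the `alpha` argument.
# weights=None avoids a download; pretrained ImageNet weights exist for
# alpha in {0.35, 0.50, 0.75, 1.0, 1.3, 1.4}.
full = tf.keras.applications.MobileNetV2(alpha=1.0, weights=None)
slim = tf.keras.applications.MobileNetV2(alpha=0.35, weights=None)

# Channel counts scale with alpha, so the slim model has far fewer parameters.
print(full.count_params(), slim.count_params())
```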

Thanks for the reply. I am trying to recreate coco-ssd from TensorFlow.js on a custom dataset.
When training I end up with a model that is around 10 MB, but the pretrained coco-ssd model using lite_mobilenet_v2 is around 1 MB.

I got a lot of browser crashes and ‘Aw, Snap! Something went wrong’ errors when running my model trained with ssd_mobilenet_v2 and TF2.

So my basic goal is to reduce the model size and apply the same optimizations for browser execution that were applied in the coco-ssd implementation.

I believe the model size can be reduced by lowering the depth multiplier and applying uint8 quantization.
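For the quantization half, a minimal sketch of TFLite post-training full-integer quantization with uint8 input/output tensors. The tiny Sequential network here is a hypothetical stand-in for the trained detector, and the random representative dataset would normally be real images from the training set:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in network -- NOT the real SSD MobileNet V2 architecture,
# just something small enough to convert quickly.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),
])

# The converter needs sample inputs to calibrate the integer ranges;
# in practice, yield preprocessed images from the training data.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization with uint8 input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
print(f"quantized model: {len(tflite_model)} bytes")
```

Weights go from float32 to int8, which is where most of the roughly 4x size reduction comes from.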

Awesome. Not an expert here but off the top of my head:

As well as:

This post may be a bit dated but useful:

Another doc you may find useful - retraining with a custom dataset (scroll down to the Download the headless model section of the tutorial): Transfer learning with TensorFlow Hub  |  TensorFlow Core

Thanks, I will give it a try.