Size reduction for TFLite models

I have trained the EfficientDet-Lite0 through EfficientDet-Lite4 architectures for an object detection task; the resulting models range from 4.5 MB to 20.8 MB. Is there any technique to make these models more lightweight, accepting some accuracy loss? For example, reducing the EfficientDet-Lite0 model from 4.5 MB to under 2 MB.
I am deploying this model on a Khadas VIM3 board with 2 GB of RAM, and I am not targeting the NPU.
Thanks in advance.

Hi @Chetan_Deshmukh, Could you please confirm whether you have applied quantization to your model?

If not, you can apply quantization to reduce the size of the model. Quantization works by reducing the precision of the numbers used to represent a model's parameters, which by default are 32-bit floating-point values. This results in a smaller model size and faster computation. Please refer to this document for the percentage of size reduction achieved by the different quantization techniques. Thank you.
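As a concrete illustration, post-training dynamic-range quantization needs only one extra converter flag. A minimal sketch, assuming TensorFlow is installed and using a tiny Keras model purely as a stand-in for the trained EfficientDet-Lite detector:

```python
import tensorflow as tf

# Hypothetical stand-in for the trained detector: a tiny Keras model,
# just to demonstrate the converter flags.
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(256, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(inputs, outputs)

# Baseline float32 conversion.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_fp32 = converter.convert()

# Post-training dynamic-range quantization: weights are stored as
# 8-bit integers instead of float32, giving roughly a 4x reduction
# in the weight storage.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_int8 = converter.convert()

print(f"float32: {len(tflite_fp32)} bytes, quantized: {len(tflite_int8)} bytes")
```

The same `optimizations` flag applies regardless of how the original model was built, as long as it can be loaded as a Keras model or SavedModel.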

@Kiran_Sai_Ramineni I have already applied quantization to our models. Please find the model below.
Can you tell me how to reduce the size of such models further, to make them even more lightweight?
Thank You.
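One thing worth checking at this point is which quantization the attached models actually use, since the converter options reduce size by different amounts: float16 quantization roughly halves the float32 size, while dynamic-range (int8 weight) quantization roughly quarters it. A sketch comparing the variants, again using a small Keras model as a stand-in rather than the actual EfficientDet-Lite detector:

```python
import tensorflow as tf

# Small stand-in model (not the actual detector).
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(256, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(inputs, outputs)

def convert(model, float16=False, dynamic=False):
    """Convert to a TFLite flatbuffer with the chosen quantization."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    if float16 or dynamic:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if float16:
        # Store weights as float16 (~2x smaller than float32).
        converter.target_spec.supported_types = [tf.float16]
    return converter.convert()

size_fp32 = len(convert(model))
size_fp16 = len(convert(model, float16=True))
size_int8 = len(convert(model, dynamic=True))
print(f"fp32: {size_fp32}, fp16: {size_fp16}, int8 weights: {size_int8}")
```

If the models are already at the dynamic-range/int8 level, further size reduction usually requires changing the model itself (for example pruning or weight clustering via the TensorFlow Model Optimization toolkit, or simply a smaller architecture) rather than converter settings alone.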

Hi @Chetan_Deshmukh, could you please let us know the actual size of the model (before converting to TFLite)? Thank you.

@Kiran_Sai_Ramineni I obtained this model using TFLite Model Maker with the model.export method, so I don't have any information about the model size before conversion. With model.export I also got a .pb file, which was 14.4 MB.
Thank you.