Quantization Aware Training for EfficientDet Lite

Hello,

I am using tflite_model_maker to train an object detector on a custom dataset.

import tensorflow as tf
from tflite_model_maker import object_detector
from tflite_model_maker.config import ExportFormat

assert tf.__version__.startswith('2')

# Load the train/validation/test splits from the CSV annotation file.
train_data, validation_data, test_data = object_detector.DataLoader.from_csv('data.csv')

# EfficientDet-Lite2 spec with custom training hyperparameters.
spec = object_detector.EfficientDetSpec(
    strategy='gpus',
    tflite_max_detections=100,
    model_name='efficientdet-lite2',
    uri='https://tfhub.dev/tensorflow/efficientdet/lite2/feature-vector/1',
    hparams={
        'optimizer': 'adam',
        'learning_rate': 0.01,
        'lr_warmup_init': 0.0008,
        'max_instances_per_image': 301,
        'autoaugment_policy': 'v0'},
    epochs=20)

# Fine-tune the whole EfficientDet-Lite2 model on the custom dataset.
model = object_detector.create(train_data, model_spec=spec, batch_size=64,
                               train_whole_model=True, validation_data=validation_data)
 
# Export the trained model as a TFLite file and as a SavedModel.
model.export(export_dir='.', tflite_filename='PTQ_model.tflite')
model.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL])

Can you please guide me on how to apply quantization-aware training to fine-tune the trained model?

Thank you

Hi @ilyas_aroui, you can apply quantization-aware training to your trained model with the TensorFlow Model Optimization Toolkit:

import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model

# q_aware stands for quantization aware.
# Note: quantize_model expects a tf.keras.Model.
q_aware_model = quantize_model(model)
For more details, please refer to this document. Thank you.