EfficientNetB0 compression

I’m not sure if this is the best place for this, and to date there has been little interest in this work, but I’m passing it along in case others find elements of it useful: Bird Detection TinyML - Cranberry Grape | Cosmic Bee | Tim Lovett

I’ve left some notebooks documenting my process and showing the model as it evolved from the original full-sized EfficientNetB0 (224x224 input, 524 outputs, 4,491,895 parameters, swish activations, 95.31% accuracy) down to the final 96x96 relu6 model with 411 outputs (at one point in the process I limited the outputs further, to birdfeeder-only birds), 190,770 parameters, and 83.49% accuracy (82% accuracy for the int8-quantized tflite model).
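For anyone who wants to reproduce the starting point, a model of roughly that shape can be instantiated like this. This is a minimal sketch, not my actual notebook code; the head layers and `NUM_CLASSES = 411` are assumptions matching the numbers above, and `weights=None` stands in for loading your own trained weights.

```python
import tensorflow as tf

NUM_CLASSES = 411  # the reduced output count from the post

# EfficientNetB0 backbone at the reduced 96x96 resolution
# (include_top=False drops the stock 1000-class ImageNet head)
base = tf.keras.applications.EfficientNetB0(
    include_top=False,
    weights=None,               # placeholder: load your own trained weights
    input_shape=(96, 96, 3),    # downsized from the stock 224x224
)
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)
```

Note this only reproduces the input/output shape; the layer removal and activation swap described below would still need to be applied before the parameter count drops anywhere near 190,770.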

I’ve been working on compressing EfficientNetB0 for the past few months and have seen decent results. Starting from the trained model, I downsized the input while maintaining most of the accuracy, converted the activations to relu, selectively removed layers (retraining the remaining layers to regain the lost accuracy), and finally converted over to relu6. I made that last change because, with relu’s larger spread of activation values, I found the quantized accuracy to be too heavily influenced by the representative dataset (something noted in the EfficientNet-Lite announcement).
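For context, the quantization step I’m describing is standard full-int8 post-training quantization with a representative dataset. Here is a minimal sketch of that step; the tiny relu6 model and the random calibration data are placeholders for the real network and real training images, not my actual pipeline.

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-in for the compressed bird model; relu6 caps
# activations at 6, which keeps the int8 calibration range stable
# regardless of which representative samples are drawn.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation=tf.nn.relu6),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(411, activation="softmax"),
])

def representative_dataset():
    # Calibration samples: random data here; use real training images.
    for _ in range(10):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()  # bytes of the int8 model
```

With unbounded relu, the calibration range (and hence the per-tensor scales) shifts with whatever samples the representative dataset happens to contain, which is consistent with the sensitivity I saw.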

It appears to work for me: random bird images I’ve tried classify correctly, and the tflite model shows 82% accuracy on the (kept pure) test image set, which makes me think the process is sound. Still, given how small the model is and the general lack of interest, I feel like I’m missing some major red flag.
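For anyone who wants to sanity-check the tflite accuracy claim, this is roughly how I measure it. A hedged sketch, not my exact evaluation code; `tflite_accuracy` is a name I’m inventing here, and it handles both float and int8 models by reading the input tensor’s quantization parameters.

```python
import numpy as np
import tensorflow as tf

def tflite_accuracy(tflite_model, images, labels):
    """Top-1 accuracy of a tflite model over a held-out test set."""
    interp = tf.lite.Interpreter(model_content=tflite_model)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    scale, zero_point = inp["quantization"]
    correct = 0
    for image, label in zip(images, labels):
        if scale == 0:  # float model: feed inputs as-is
            q = image.astype(inp["dtype"])
        else:           # int8 model: quantize inputs to the model's scale
            q = np.round(image / scale + zero_point).astype(inp["dtype"])
        interp.set_tensor(inp["index"], q[np.newaxis, ...])
        interp.invoke()
        pred = np.argmax(interp.get_tensor(out["index"])[0])
        correct += int(pred == label)
    return correct / len(labels)
```

Keeping the test set out of both training and the representative dataset (as I did) is what makes the 82% figure meaningful.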

Anyhow, I would appreciate it if someone could take a look and call out any deficiency in the approach if there is one. If not, any validation that this approach has promise would be welcome (I posted in Show and Tell a few months ago showing the beginnings of this, but got no feedback).