Hi,
When using CMSIS-NN with a float32 model, I don't see any performance improvement. With a quantized model, however, the speedup is around 4x.
I would like to understand why this happens with the float32 model.