I’m trying to apply different precisions to different layer types, say Conv2D and fully-connected (Dense).
I’ve tried to quantize the dense layers to int16 by modifying the `LastValueQuantizer`, but I’m getting the following error when allocating tensors after conversion:
```
File "/home/shivaubuntu/.local/lib/python3.8/site-packages/tensorflow/lite/python/interpreter.py", line 513, in allocate_tensors
    return self._interpreter.AllocateTensors()
RuntimeError: tensorflow/lite/kernels/fully_connected.cc:166 input->type != kTfLiteFloat32 (INT8 != FLOAT32)
Node number 1 (FULLY_CONNECTED) failed to prepare.
Failed to apply the default TensorFlow Lite delegate indexed at 0
```
colab link: Google Colab