After post-training quantization, is it possible to change the dense-layer weights in TF Lite models?
An example of what I would like to do:
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path=Flags.tfl_file_name)
interpreter.allocate_tensors()

# Locate the dense layer's weight tensor by name.
tensor_details = interpreter.get_tensor_details()
weight_idx = 0
for tensor in tensor_details:
    if tensor['name'] == 'sequential/dense/MatMul':
        weight_shape = tensor['shape']
        weight_idx = tensor['index']

weight = interpreter.get_tensor(weight_idx)    # current quantized weights
weight = np.zeros(weight_shape, dtype='int8')  # e.g. overwrite them with zeros
print(weight)
interpreter.set_tensor(weight_idx, weight)
This feature is needed for my hardware-accelerated Fully_Connected kernel.
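For reference, below is a minimal sketch of how the result of such a modification could be checked, assuming a single-input, single-output model; it continues the snippet above (reusing interpreter and weight_idx) and uses only the standard Interpreter API, so nothing here is specific to my model beyond those names:

# Sketch: check whether set_tensor() actually changed the stored weights.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy_input = np.zeros(input_details[0]['shape'],
                       dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy_input)
interpreter.invoke()

# Read the weight tensor back and compare it against what was written.
weight_after = interpreter.get_tensor(weight_idx)
print('weights still zero after invoke():', np.all(weight_after == 0))
print('output:', interpreter.get_tensor(output_details[0]['index']))

The idea is simply that if the read-back weights are still all zeros after invoke(), the write took effect; if not, set_tensor apparently does not persist for non-input tensors in my version of TF Lite.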