Integer Quantization of LSTM model

I have a sequential Keras model using Dense and LSTM layers. After training the model, I saved it in .h5 format. I am trying to convert this model to a TensorFlow Lite model with 8-bit integer quantization to run it on the Coral Dev Board. I can perform the conversion to a Lite model just fine, but when I try to quantize I get: "ValueError: Failed to parse the model: Only models with a single subgraph are supported, model had 3 subgraphs."

System Information:
Ryzen 5 3600
AMD 5700xt
TensorFlow version: tf-nightly

Model design:

from tensorflow.keras.layers import InputLayer, Dense, LSTM

# WINDOW_SIZE, WINDOW_STEP, DENSE_LAYERS, LSTM_LAYERS, CLASSIFICATION and METRICS
# are constants defined elsewhere in the project.
self.model = tf.keras.Sequential([
    InputLayer(input_shape=(WINDOW_SIZE // WINDOW_STEP, 1), name='input'),
    Dense(DENSE_LAYERS, activation='relu'),
    LSTM(LSTM_LAYERS),
    Dense(len(CLASSIFICATION.keys()), activation='softmax', name='output')
])
self.model.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=METRICS)

To reproduce the error:
Clone https://github.com/jboothby/LSTM_Error_Report and run convert_to_lite.py

I used the example code from the TensorFlow Lite post-training integer quantization guide. My representative data is included in the .csv file in the repository.
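
For context, the representative dataset generator is along these lines (a minimal sketch; the actual file name, column layout, and windowing live in the repository, so representative_data.csv and the reshape below are placeholders):

import numpy as np
import pandas as pd

def representative_data_gen():
    # Load representative samples from the CSV in the repository
    # (file name and preprocessing here are placeholders).
    data = pd.read_csv('representative_data.csv').values.astype(np.float32)
    for row in data[:100]:
        # Each yielded sample must match the model's input shape:
        # (1, WINDOW_SIZE // WINDOW_STEP, 1)
        yield [row.reshape(1, -1, 1)]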

The error seems to be coming from the representative dataset line. If I change the current code
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.lite.RepresentativeDataset(representative_data_gen)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

to

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

Then it executes fine, but doesn’t do full integer quantization.
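
One way to confirm this is to inspect the converted model's tensor types: with only the DEFAULT optimization flag and no representative dataset, the converter performs dynamic-range quantization, so the input and output tensors remain float32. A quick check on the in-memory model bytes (sketch):

import tensorflow as tf

# Load the converted flatbuffer from memory and inspect the I/O tensor types.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

print(interpreter.get_input_details()[0]['dtype'])   # float32 here, not uint8
print(interpreter.get_output_details()[0]['dtype'])  # float32 here, not uint8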

This is my first time posting a help question on this forum, so please let me know what else I can add to clarify.

I suggest you subscribe to:

[RNN] Rolled SimpleRNN and GRU TFLite conversion · Issue #50226 · tensorflow/tensorflow · GitHub

Thank you for your responses. If I’m understanding this correctly, there just isn’t support for quantizing LSTM models right now. I’ll subscribe to this and watch for developments.
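
(One thing I may experiment with in the meantime, purely as an untested guess based on the rolled-vs-unrolled discussion in that issue: unrolling the LSTM so the converter doesn't emit the extra control-flow subgraphs, e.g.

# Untested workaround idea: an unrolled LSTM avoids the While-loop subgraphs,
# at the cost of a fixed sequence length and a larger graph.
LSTM(LSTM_LAYERS, unroll=True)

I don't know yet whether the quantizer accepts the model after this change.)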