Conversion of LSTM model to tflite

I am trying to convert a simple LSTM model to tflite. But conversion is asking for the flex delegate.

Following is the architecture :
model_input = tf.keras.Input(shape=(124, 129), name='input')
LSTM_out = tf.keras.layers.LSTM(units=256)(model_input)
dense_1 = tf.keras.layers.Dense(128, activation='tanh')(LSTM_out)
dense_2 = tf.keras.layers.Dense(64, activation='tanh')(dense_1)
dense_3 = tf.keras.layers.Dense(32, activation='tanh')(dense_2)
model_output = tf.keras.layers.Dense(num_labels)(dense_3)
model = tf.keras.Model([model_input], [model_output])

Converter config :
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter._experimental_lower_tensor_list_ops = False
converter.conversion_print_before_pass = "all"
converter.inference_input_type = tf.float32
converter.inference_output_type = tf.float32

I have another set of models with a very similar architecture, and their conversion worked perfectly fine without the flex delegate.

As flex delegation is not an option in tflite-micro, this is a serious issue for me.

Please let me know how I can get around it or kindly point me in the right direction.

Thanks in advance!


Please go through the LSTM Fusion Code Lab, which shows how to convert an LSTM model without select ops. It may help you. Thank you!
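For reference, here is a minimal sketch of the conversion in the spirit of that code lab, using the architecture you posted (`num_labels` is a placeholder, and quantization options are omitted). The key points are that the converter is created directly from the Keras model and that the batch dimension is pinned to a fixed size, so the Keras LSTM can be fused into TFLite's built-in `UnidirectionalSequenceLSTM` op instead of falling back to flex ops:

```python
import tensorflow as tf

num_labels = 8  # placeholder; use your actual label count

# Same architecture as above, but with a fixed batch size of 1 so all
# shapes are static at conversion time.
model_input = tf.keras.Input(shape=(124, 129), batch_size=1, name='input')
lstm_out = tf.keras.layers.LSTM(units=256)(model_input)
dense_1 = tf.keras.layers.Dense(128, activation='tanh')(lstm_out)
dense_2 = tf.keras.layers.Dense(64, activation='tanh')(dense_1)
dense_3 = tf.keras.layers.Dense(32, activation='tanh')(dense_2)
model_output = tf.keras.layers.Dense(num_labels)(dense_3)
model = tf.keras.Model([model_input], [model_output])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Builtins only -- conversion fails loudly here if any op would need flex.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter._experimental_lower_tensor_list_ops = False
tflite_model = converter.convert()

with open("lstm_model.tflite", "wb") as f:
    f.write(tflite_model)
```

If this converts without raising an error about select ops, the fused model should also load in the standard `tf.lite.Interpreter`.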


Thanks a lot. That helped.

The LSTM Fusion Code Lab helped, and I managed to convert my model to TFLite without select ops. However, the TFLite model's accuracy degraded significantly with this method compared to my original select-ops conversion. I suspect the following code, which runs before conversion, is doing something to my trained model:

run_model = tf.function(lambda x: model(x))

BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28

concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))

MODEL_DIR = "keras_lstm"
model.save(MODEL_DIR, save_format="tf", signatures=concrete_func)
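Saving with a concrete function only pins the input signature; it does not retrain or modify the weights. A more likely suspect for the accuracy drop is `converter.optimizations = [tf.lite.Optimize.DEFAULT]`, which quantizes the weights. One way to check is to convert without that flag and compare the Keras and TFLite outputs on the same input. A sketch, using a small untrained stand-in model with the code lab's `(1, 28, 28)` signature (swap in your trained model in practice):

```python
import numpy as np
import tensorflow as tf

# Small stand-in with the code lab's shapes and a fixed batch size of 1.
inp = tf.keras.Input(shape=(28, 28), batch_size=1)
x = tf.keras.layers.LSTM(16)(inp)
out = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inp, out)

# Convert WITHOUT Optimize.DEFAULT: if accuracy recovers, the drop came
# from quantization, not from the LSTM fusion itself.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

sample = np.random.rand(1, 28, 28).astype(np.float32)
interpreter.set_tensor(input_details["index"], sample)
interpreter.invoke()
tflite_y = interpreter.get_tensor(output_details["index"])
keras_y = model(sample).numpy()

# A pure float32 conversion should agree very closely with Keras.
max_diff = np.max(np.abs(keras_y - tflite_y))
print("max abs difference:", max_diff)
```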

For your further information, my model is a CNN-LSTM model, which goes something like this:

pretrained_model = MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=(112,112,3)
)

mobilenet_model = Sequential([
    pretrained_model,
    GlobalAveragePooling2D(),
    Dense(256,activation='relu')
])

num_timesteps = 10
img_width = 112
img_height = 112
num_channels = 3

lstm_model = tf.keras.Sequential([
TimeDistributed(mobilenet_model, input_shape=(num_timesteps, img_width, img_height, num_channels)),
Reshape((num_timesteps, 256)),
LSTM(128),
Dense(64,activation=‘tanh’),
Dense(5, activation=‘softmax’)
])
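The convolutional layers themselves map onto TFLite builtins; the fusion constraint applies to the LSTM, which again needs fully static shapes, including the batch dimension. A sketch of pinning the batch size on this CNN-LSTM before conversion. To keep the example self-contained it uses `weights=None` and a reduced-width backbone (`alpha=0.35`); in practice you would use your trained ImageNet-initialized model:

```python
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import (Dense, GlobalAveragePooling2D, LSTM,
                                     Reshape, TimeDistributed)

num_timesteps, img_width, img_height, num_channels = 10, 112, 112, 3

# weights=None and alpha=0.35 only to keep this sketch light; use your
# trained weights and the default width in practice.
pretrained_model = MobileNetV2(weights=None, alpha=0.35,
                               include_top=False, input_shape=(112, 112, 3))
mobilenet_model = tf.keras.Sequential([
    pretrained_model,
    GlobalAveragePooling2D(),
    Dense(256, activation='relu'),
])

lstm_model = tf.keras.Sequential([
    # batch_input_shape pins the batch dimension to 1 so the LSTM stays
    # fusable during conversion.
    TimeDistributed(mobilenet_model,
                    batch_input_shape=(1, num_timesteps, img_width,
                                       img_height, num_channels)),
    Reshape((num_timesteps, 256)),
    LSTM(128),
    Dense(64, activation='tanh'),
    Dense(5, activation='softmax'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(lstm_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()
```

If `convert()` raises an error mentioning select ops here, the message usually names the offending op, which narrows down whether the CNN or the LSTM is the problem.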

Would this more complex model with CNN cause issues with the fusion conversion?

Thank you very much in advance! Please guide me so that I can finally convert my CNN-LSTM model to TFLite without the select ops, which take up too much memory in my Android application.