Using masking layer and variable length sequence input

Hi, I have an LSTM model that uses the tf.data.Dataset.bucket_by_sequence_length method to generate variable-length multivariate time-series data with 18 features. The time series within each batch are padded with -999.9 to a bucket-specific length. An example input shape is (16, 200, 18), but the middle dimension varies, so I set the masking layer's input_shape to (None, 18). With that I get this error:

ValueError: weights can not be broadcast to values. values.rank=2. weights.rank=3. values.shape=(None, None). weights.shape=(None, None, 1). Received weights=Tensor("ExpandDims_2:0", shape=(None, None, 1), dtype=float32), values=Tensor("remove_squeezable_dimensions/Squeeze:0", shape=(None, None), dtype=float32)

Interestingly, the model fits successfully when no masking layer is provided and the input shape is (None, 18).
Here is what I have right now:

    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = layers.Input(shape=(None, 18))
    x = layers.Masking(mask_value=-999.9)(inputs)  # mask the -999.9 padding value
    x = layers.LSTM(30, return_sequences=True)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
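For context, the input pipeline looks roughly like this; the sequence lengths, bucket boundaries, and batch sizes below are made up for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical generator of variable-length sequences with 18 features,
# mimicking the pipeline described above.
def gen():
    for length in (50, 120, 200):
        x = np.random.rand(length, 18).astype("float32")
        y = np.random.randint(0, 2, size=(length, 1)).astype("float32")
        yield x, y

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(None, 18), dtype=tf.float32),
        tf.TensorSpec(shape=(None, 1), dtype=tf.float32),
    ),
)

# Bucket by sequence length; within each batch, sequences are padded
# with -999.9 up to the longest sequence in that bucket.
ds = ds.bucket_by_sequence_length(
    element_length_func=lambda x, y: tf.shape(x)[0],
    bucket_boundaries=[100, 150],
    bucket_batch_sizes=[2, 2, 2],
    padding_values=(-999.9, -999.9),
)

for x_batch, y_batch in ds:
    print(x_batch.shape)  # (batch, padded_length, 18)
```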

I would be grateful for any suggestions.

@PohnJaul2,

Welcome to the TensorFlow Forum!

Could you please provide a complete code snippet or a Colab notebook so we can debug this?

Thank you!