Handling overfitting with mask layers

How can I handle overfitting with mask layers? With a mask layer there is a high chance of zeros being present in the input data.

Did you try leaving a portion of your data (10% to 15%) for validation purposes? You can define an early-stopping callback object (read here: tf.keras.callbacks.EarlyStopping  |  TensorFlow Core v2.5.1) and use it in training. The callback keeps track of the model's accuracy on the validation data while training on the train data. If no progress is made for several epochs, it stops training and restores the best weights. It should work regardless of the model architecture or task.
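A minimal sketch of what that callback setup might look like (the `monitor` and `patience` values here are illustrative, not prescribed by the thread):

```python
import tensorflow as tf

# Stop training when validation loss has not improved for 5 epochs,
# and roll back to the weights from the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,
)

# Then pass it to fit(), reserving a slice of the data for validation:
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=100, callbacks=[early_stop])
```

`restore_best_weights=True` is what makes the final model correspond to the best validation score rather than the last epoch run.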

For validation I have 10% of the data and I am using early stopping. But with early stopping I can't get good accuracy. Specifically, with mask layers it's hard to get good accuracy because of the zero padding for different time steps of the data.

it has a high chance of zeros being present in the input data.

Ideally the result shouldn’t be dependent on the length of the padding.
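A quick way to check this property is to feed the same sequence with two different amounts of zero padding through a model with a Masking layer; the masked timesteps are skipped, so both outputs should be identical. This is a sketch with a toy untrained model, not the poster's actual architecture:

```python
import numpy as np
import tensorflow as tf

# Toy model: Masking tells downstream layers to skip all-zero timesteps
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 2)),
    tf.keras.layers.Masking(mask_value=0.0),
    tf.keras.layers.LSTM(4),
])

seq = np.random.rand(1, 3, 2).astype("float32")  # 3 real timesteps
pad_short = np.concatenate([seq, np.zeros((1, 2, 2), "float32")], axis=1)
pad_long = np.concatenate([seq, np.zeros((1, 5, 2), "float32")], axis=1)

out_short = model(pad_short).numpy()
out_long = model(pad_long).numpy()
# The LSTM ignores the masked zero timesteps, so the outputs match
# regardless of how much padding was appended.
```

If the two outputs differ in a real model, the mask is probably being dropped somewhere (e.g. by a layer that does not support masking).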

In some cases tf.ragged can make this sort of code easier to write.
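For example, a ragged tensor stores each sequence at its true length, so no padding zeros enter the data at all, and a dense padded view is produced only when explicitly requested (the values here are made up for illustration):

```python
import tensorflow as tf

# Variable-length sequences stored without any padding
rt = tf.ragged.constant([[1., 2., 3.], [4., 5.]])

# The true length of each row is preserved
lengths = rt.row_lengths()  # [3, 2]

# Padding only happens if a dense tensor is explicitly requested
dense = rt.to_tensor(default_value=0.0)  # shape (2, 3)
```

Keras layers such as Embedding and the RNN layers accept ragged inputs directly (via `tf.keras.Input(..., ragged=True)`), which sidesteps the padding question entirely.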

It may not be the masking causing the poor performance.