Unable to get good accuracy in sequence classification


I am new to machine learning and I am working on a sequence classification problem.
Each sample in the dataset is a sequence of shape (20, 9):
20 time steps with 9 features each.

I have tried the model below, but I have not been able to get good accuracy.

f = 32
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh', return_sequences=True, input_shape=(window_length, 9)))  # window_length = 20
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh', return_sequences=True))
model.add(tf.keras.layers.LSTM(units=4*f, activation='tanh'))
model.add(tf.keras.layers.Dense(units=256, activation='tanh'))
model.add(tf.keras.layers.Dense(units=64, activation='tanh'))
model.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# TP, TN, FP, FN are confusion-matrix metric objects (tf.keras.metrics.TruePositives(), etc.);
# WeightsSaver is my custom checkpoint callback.
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy', TP, TN, FP, FN])
history = model.fit(X_train, y_train, epochs=256, batch_size=256,
                    callbacks=[WeightsSaver(model, 1)],
                    validation_data=(X_test, y_test))
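For reference, here is a self-contained version of the same architecture on synthetic data, so the shapes are easy to check. The random data, label imbalance ratio, and the specific metric objects standing in for TP/TN/FP/FN are my assumptions, not my real setup:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for my data: 1000 sequences of 20 time steps x 9 features,
# with roughly 10% positives (my real data is similarly imbalanced).
window_length, n_features = 20, 9
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, window_length, n_features)).astype("float32")
y = (rng.random(1000) < 0.1).astype("float32")

f = 32
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window_length, n_features)),
    tf.keras.layers.LSTM(4 * f, return_sequences=True),
    tf.keras.layers.LSTM(4 * f, return_sequences=True),
    tf.keras.layers.LSTM(4 * f),
    tf.keras.layers.Dense(256, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    # Built-in confusion-matrix metrics in place of my TP/TN/FP/FN names:
    metrics=["accuracy",
             tf.keras.metrics.TruePositives(),
             tf.keras.metrics.TrueNegatives(),
             tf.keras.metrics.FalsePositives(),
             tf.keras.metrics.FalseNegatives()],
)
```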

I have also tried deeper networks, CNN-LSTMs, and CNN-LSTMs with dense layers at the end.

I am not getting a good confusion matrix: the model mostly produces false negatives, even on the training set.

I have also tried balancing the data with SMOTE and SVM-SMOTE, but accuracy is still low.
The problems I am facing:

Without resampling: the majority class dominates the predictions and very few minority-class samples are detected.
With resampling: I get many false positives on both the training and test data.
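One alternative to resampling that I am considering is passing class weights to model.fit so the loss penalizes missed minority examples more heavily. A minimal sketch, assuming a hypothetical 9:1 imbalance and scikit-learn's weight helper:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical 9:1 imbalanced labels standing in for my y_train.
y_train = np.array([0] * 900 + [1] * 100)

# "balanced" weights: n_samples / (n_classes * count per class).
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=y_train)
class_weight = {0: weights[0], 1: weights[1]}

# Would then be passed to training, e.g.:
# model.fit(X_train, y_train, epochs=256, batch_size=256,
#           class_weight=class_weight, validation_data=(X_test, y_test))
```

Would this be a better starting point than SMOTE for this kind of data?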

I have posted the data on the Kaggle page below.

Could someone experienced in this field share their experience and possible solutions, or point out mistakes I could be making?

  1. I have read that other activation functions (relu, swish) can be effective; should I try them here instead of tanh?
  2. Is return_sequences=True necessary on the stacked LSTM layers?
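Regarding question 2, my current understanding of return_sequences, checked on dummy input (the unit count 8 is arbitrary):

```python
import numpy as np
import tensorflow as tf

# Dummy batch: 2 sequences of 20 time steps x 9 features.
x = np.zeros((2, 20, 9), dtype="float32")

# True  -> one output per time step, required when another LSTM follows.
seq = tf.keras.layers.LSTM(8, return_sequences=True)(x)    # shape (2, 20, 8)

# False -> only the final step's output, suitable before Dense layers.
last = tf.keras.layers.LSTM(8, return_sequences=False)(x)  # shape (2, 8)
```

So I believe it is needed on the first two LSTM layers of my model but not the last; please correct me if that is wrong.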