How can I modify the following code to work for any number of features instead of only 80?

I have the following code, which runs fine when the dataset has 80 columns/features, but it fails when I run it on a dataset with a different number of features. For example, I received the following error when the CSV file had 50 columns: `Input 0 of layer "sequential_1" is incompatible with the layer: expected shape=(None, 50, 1), found shape=(None, 80, 1)`

```python
# cnn autoencoder architecture
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import (Conv1D, Conv1DTranspose, Dense, Flatten,
                                     Reshape, BatchNormalization, LeakyReLU, ReLU)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import (ReduceLROnPlateau, EarlyStopping,
                                        ModelCheckpoint)

no_of_features = train_X.shape[1]

class AE_L2(Model):
  def __init__(self, latent_dim):
    super(AE_L2, self).__init__()
    self.latent_dim = latent_dim
    self.encoder = tf.keras.Sequential([
        Conv1D(16, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1D(32, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1D(64, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        # bottleneck: flatten and project to the latent dimension,
        # so the decoder's Dense(640) / Reshape((10, 64)) line up
        Flatten(), Dense(latent_dim),
    ])
    self.decoder = tf.keras.Sequential([
        Dense(640), Reshape((10, 64)), BatchNormalization(), ReLU(),
        Conv1DTranspose(32, 4, 2, 'same'), BatchNormalization(), ReLU(),
        Conv1DTranspose(16, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1DTranspose(1, 4, 2, 'same', activation='sigmoid'),
    ])

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

adam = Adam(0.01)
reduce_lr = ReduceLROnPlateau()
early_stopping = EarlyStopping(patience=2)
model_checkpoint = ModelCheckpoint("./models_AE_L2/checkpoint",
                                   save_weights_only=True, save_best_only=True)

def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

autoencoder = AE_L2(no_of_features)
autoencoder.compile(optimizer=adam, loss=mse)
autoencoder.build((None, no_of_features, 1))
```

Any help or explanation of why it always expects 80 features, and how I could implement it to work with any number of features, would be appreciated.
![Screenshot 2023-05-23 143432|541x500](upload://9Rs8cRJJSExnaqhY9PyM7aODgoj.png)

I have also attached part of the model summary here. How can I compute the parameter values, please? I have tried using ((kernel_size * stride) + 1) * filters but end up with a different set of values.

The key to modifying the code is to avoid hardcoding anything derived from the 80-feature input. Replace the hardcoded sizes with a variable that holds the actual feature count, and compute everything that depends on it from that variable; the model then adapts to however many features the dataset has. The same variable also serves as the upper limit wherever you loop over the features to access them one by one.
For example, if you were using a for-loop, it might look something like this:

```python
for i in range(number_of_features):
    feature[i] = some_function(feature[i])
```

This allows you to apply whatever function you need to each of your features without having to type each one out manually, no matter how many there are.
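
In this model specifically, the sizes that pin it to 80 features live in the decoder: `Dense(640)` and `Reshape((10, 64))` assume that the three stride-2 encoder layers have reduced the 80 inputs to a length of 10 (and 10 × 64 = 640). Below is a minimal sketch of how those sizes could be derived from the feature count instead (reusing the imports, `adam`, and `mse` from your code). Note the assumption that the feature count is divisible by 8: 'same'-padded stride-2 layers round odd lengths up, so the decoder would otherwise overshoot (50 → 25 → 13 → 7 on the way down, but 7 → 14 → 28 → 56 on the way up).

```python
# a sketch of a feature-count-agnostic AE_L2; assumes no_of_features
# is divisible by 8 (three stride-2 stages each halve the length)
class AE_L2(Model):
  def __init__(self, latent_dim, no_of_features):
    super(AE_L2, self).__init__()
    self.latent_dim = latent_dim
    downsampled = no_of_features // 8   # e.g. 80 -> 10, 64 -> 8
    self.encoder = tf.keras.Sequential([
        Conv1D(16, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1D(32, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1D(64, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Flatten(), Dense(latent_dim),
    ])
    self.decoder = tf.keras.Sequential([
        # sizes computed from no_of_features instead of hardcoded 640 / (10, 64)
        Dense(downsampled * 64), Reshape((downsampled, 64)),
        BatchNormalization(), ReLU(),
        Conv1DTranspose(32, 4, 2, 'same'), BatchNormalization(), ReLU(),
        Conv1DTranspose(16, 4, 2, 'same'), BatchNormalization(), LeakyReLU(),
        Conv1DTranspose(1, 4, 2, 'same', activation='sigmoid'),
    ])

  def call(self, x):
    return self.decoder(self.encoder(x))

autoencoder = AE_L2(latent_dim=no_of_features, no_of_features=no_of_features)
autoencoder.compile(optimizer=adam, loss=mse)
autoencoder.build((None, no_of_features, 1))
```

For feature counts that are not divisible by 8 (such as 50), you would additionally need to pad the input up to the next multiple of 8 or crop the decoder output back to the original length.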
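
As for the parameter values in the summary: Keras counts a Conv1D layer's parameters as (kernel_size × input_channels + 1) × filters, i.e. the kernel weights plus one bias per filter; the stride plays no role in the count. A quick check against the encoder layers in your code:

```python
# Conv1D params = (kernel_size * input_channels + 1) * filters
(4 * 1 + 1) * 16     # Conv1D(16, 4) on the 1-channel input -> 80
(4 * 16 + 1) * 32    # Conv1D(32, 4) on 16 channels         -> 2080
(4 * 32 + 1) * 64    # Conv1D(64, 4) on 32 channels         -> 8256
# each BatchNormalization contributes 4 * channels parameters
# (gamma, beta, moving mean, moving variance), e.g. 4 * 16 = 64
```

The same formula applies to the Conv1DTranspose layers, and a Dense layer has (input_units + 1) × output_units parameters.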