Higher classification accuracy for 1DCNN when adding one extra dimension to 1D time series inputs

I have a set of time series (accelerometer measurements) with dimensions 1800x2000 (number of measurements x length of each measurement). I want to classify this data into five classes using a 1DCNN in Keras. The expected input shape for a Conv1D layer is [batch_size, timesteps, channels], which in my case would be [1800, 2000, 1]. With this shape I get poor classification results, and overfitting occurs in most trials. But when I add an extra dimension in front of the timestep axis, giving [1800, 1, 2000, 1], I get higher classification accuracy and no overfitting.
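For reference, this is roughly how I build the two input variants (a minimal sketch; X_raw here is a random placeholder standing in for my real accelerometer array of shape (1800, 2000)):

import numpy as np

# Placeholder for the real accelerometer data, shape (1800, 2000)
X_raw = np.random.randn(1800, 2000).astype('float32')

# 3D variant: (samples, timesteps, channels) = (1800, 2000, 1)
X_3d = X_raw[:, :, np.newaxis]

# 4D variant: extra axis in front of the timesteps -> (1800, 1, 2000, 1)
X_4d = X_raw[:, np.newaxis, :, np.newaxis]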

Does anyone know what the reason could be? And how is it even possible to feed a 4D input to Conv1D? Thanks for any help.

Here is my 1DCNN model:

from tensorflow.keras import layers, Model

# Input shape per sample: (1, 2000, 1), i.e. the extra dimension in front of the timestep axis
myInput = layers.Input(shape=(1, 2000, 1))

# Two 1D convolution blocks, each halving the temporal resolution
X = layers.Conv1D(16, 9, activation='relu', padding='same', strides=2)(myInput)
X = layers.Conv1D(32, 3, activation='relu', padding='same', strides=2)(X)

# Flatten and classify with a small dense head
X = layers.Flatten()(X)
X = layers.Dense(100, activation='relu')(X)
X = layers.Dense(50, activation='relu')(X)
X = layers.Dense(20, activation='relu')(X)

out_layer = layers.Dense(5, activation='softmax')(X)

myModel = Model(myInput, out_layer)
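And this is roughly how I compile and train it (a sketch; the optimizer, loss, epochs, and validation split below are placeholders, not necessarily my exact settings):

# Placeholder training setup; assumes X_4d from above and integer class labels in {0, ..., 4}
myModel.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

y = np.random.randint(0, 5, size=1800)  # stand-in for the real labels

history = myModel.fit(X_4d, y, epochs=30, batch_size=32, validation_split=0.2)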