Higher classification accuracy for 1DCNN when adding one extra dimension to 1D time series inputs

I have a set of time series (accelerometer measurements) with dimensions 1800x2000 (number of measurements x length of each measurement). I want to do a five-class classification task on this data using a 1DCNN in Keras. The typical input shape for a Conv1D layer is [batch_size, timesteps, channels], which in my case would be [1800, 2000, 1]. With this input shape I get poor classification results, and overfitting occurs in most trials. But when I add an extra dimension in front of the timesteps ([1800, 1, 2000, 1]), I get higher classification accuracy and no overfitting.
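
Roughly, this is how the two input layouts can be built (the array below is just a random placeholder for the actual accelerometer data):

import numpy as np

# Placeholder for the real 1800 x 2000 accelerometer matrix
data = np.random.rand(1800, 2000).astype('float32')

x_3d = data[:, :, np.newaxis]              # shape (1800, 2000, 1) -> poor results, overfitting
x_4d = data[:, np.newaxis, :, np.newaxis]  # shape (1800, 1, 2000, 1) -> higher accuracy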

Does anyone know what the reason could be? And how is it possible to feed a 4D input to Conv1D? Thanks for any help.

Here is my 1DCNN model:

from tensorflow.keras import layers, Model

# Input with the extra leading dimension before the 2000 timesteps
myInput = layers.Input(shape=(1, 2000, 1))

X = layers.Conv1D(16, 9, activation='relu', padding='same', strides=2)(myInput)
X = layers.Conv1D(32, 3, activation='relu', padding='same', strides=2)(X)

X = layers.Flatten()(X)
X = layers.Dense(100, activation='relu')(X)
X = layers.Dense(50, activation='relu')(X)
X = layers.Dense(20, activation='relu')(X)

out_layer = layers.Dense(5, activation='softmax')(X)

myModel = Model(myInput, out_layer)
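
For completeness, a minimal sketch of how I compile and fit it (the optimizer, loss, epoch count, and the placeholder labels below are illustrative, not my exact settings; x_4d is the array from the snippet above):

myModel.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

# Placeholder integer labels for the five classes
labels = np.random.randint(0, 5, size=(1800,))

myModel.fit(x_4d, labels, epochs=20, batch_size=32, validation_split=0.2)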

Hi @Al_Di

Welcome to the TensorFlow Forum!

Conv1D accepts a 3+D input with shape: batch_shape + (time_steps, input_dim). When you feed a time-series tensor with an extra dimension, such as [1800, 1, 2000, 1], the leading dimensions ([1800, 1]) are treated as an extended batch shape, which is used when working with multi-level data. The extended batch shape represents the shape of a batch of batches of data and can be used to access individual batches within a batch of batches, which can improve model training and classification accuracy for time-series datasets; the convolution itself is still applied over the last two axes (time_steps, input_dim).
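
As a quick illustration (a minimal sketch with example shapes, using tf.keras in TF 2.x), the same Conv1D layer accepts both a 3D and a 4D tensor; with 4D input the extra leading axis is carried through unchanged as part of the batch shape:

import tensorflow as tf

conv = tf.keras.layers.Conv1D(16, 9, padding='same', strides=2)

# 3D input: batch_shape=(8,), steps=2000, channels=1
print(conv(tf.random.normal((8, 2000, 1))).shape)     # (8, 1000, 16)

# 4D input: batch_shape=(8, 1), steps=2000, channels=1
print(conv(tf.random.normal((8, 1, 2000, 1))).shape)  # (8, 1, 1000, 16)

In both cases the kernel slides over the 2000-step axis with a single channel; only the batch grouping of the data differs.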

Please refer to this example for more detail.