LSTM - need your help with basic sequences

Hi guys!

I need your help to better understand LSTMs for what I think is a relatively simple sequence. The code below runs, but I am not getting the expected results. I suspect the problem is the way I shape the data, or the sequence definition. Could you please shed some light?

import tensorflow as tf
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import numpy as np 
import matplotlib.pyplot as plt


#function to plot training
def plot_graphs(history, string):
  plt.plot(history.history[string])
  plt.xlabel("Epochs")
  plt.ylabel(string)
  plt.show()


#the data
# eleven samples, each sample has been divided into 5 sub-sequences of 3 elements each
# e.g. 1,2,3 is one part of the first line/sequence, 10,11,12 is the next

# Expected behaviour:
#  input  50,51   output 52
#  input  61,62   output 70
#  input  2002,2003   output 2010

data = np.array([    
1,2,3,10,11,12,20,21,22,30,31,32,40,41,42
,101,102,103,110,111,112,120,121,122,130,131,132,140,141,142
,201,202,203,210,211,212,220,221,222,230,231,232,240,241,242
,301,302,303,310,311,312,320,321,322,330,331,332,340,341,342
,401,402,403,410,411,412,420,421,422,430,431,432,440,441,442
,501,502,503,510,511,512,520,521,522,530,531,532,540,541,542
,601,602,603,610,611,612,620,621,622,630,631,632,640,641,642
,701,702,703,710,711,712,720,721,722,730,731,732,740,741,742
,801,802,803,810,811,812,820,821,822,830,831,832,840,841,842
,901,902,903,910,911,912,920,921,922,930,931,932,940,941,942
,1001,1002,1003,1010,1011,1012,1020,1021,1022,1030,1031,1032,1040,1041,1042
])

#I am not sure if this is the right way to shape the data
data = data.reshape(11,5,3)

#print(data)

#slice the data so the 3rd element of each subsequence of 3 elements is the label and the first 2 are the input
#e.g. for 1,2,3:  1,2 is the input, 3 is the label



xs = data[:,:,:-1]
ys = data[:,:,-1:]

#print ('xs')
#print (xs)

#print ('ys')
#print (ys)



#define the model
lossf =  tf.keras.losses.MeanAbsoluteError()

model = Sequential()

#tried this but it didn't make things better
#model.add(tf.keras.layers.BatchNormalization(input_shape=(5, 2)))
#model.add(Bidirectional(LSTM(150, activation='relu')))

model.add(Bidirectional(LSTM(50, activation='relu'), input_shape=(5, 2)))
model.add(Dense(1))
adam = Adam(learning_rate=0.0001)
model.compile(loss=lossf, optimizer=adam, metrics=['accuracy'])

#fit
history = model.fit(xs, ys, epochs=120, 
                  verbose=1 , 
                  validation_split=0.1 , 
                  batch_size=5 
                  #,  shuffle=True
                  )

#plot
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
plot_graphs(history, 'val_accuracy')
plot_graphs(history, 'val_loss')

#try it
predicted = model.predict([[[50, 51]]], verbose=0)   # expected 52
print('Predicted value', predicted)

predicted = model.predict([[[61, 62]]], verbose=0)   # expected 70
print('Predicted value', predicted)

predicted = model.predict([[[2002, 2003]]], verbose=0)   # expected 2010
print('Predicted value', predicted)

For series data like that, you could try converting the series into windows; there is a small sketch below. Feeding the whole series to the neural net is possible but won't give accurate results.

Take a look at this.
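For example, here is a minimal sketch of the idea in plain NumPy (the make_windows helper is just illustrative, not from any particular tutorial): slide a fixed-size window over the flat series and use the value right after each window as its label.

import numpy as np

# Illustrative helper: slide a window of `window_size` inputs over a
# 1-D series; the value that follows each window becomes its label.
def make_windows(series, window_size):
    xs, ys = [], []
    for i in range(len(series) - window_size):
        xs.append(series[i:i + window_size])
        ys.append(series[i + window_size])
    return np.array(xs), np.array(ys)

series = np.array([1, 2, 3, 10, 11, 12, 20, 21, 22, 30, 31, 32, 40, 41, 42])
xs, ys = make_windows(series, window_size=2)
print(xs[0], '->', ys[0])   # [1 2] -> 3
print(xs[7], '->', ys[7])   # [21 22] -> 30

This way the model also sees the transitions between the chunks (22 -> 30), not just the runs inside them.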


So @GeorgeMR, look at it this way.

The way you are creating your sequence training data is incorrect for your expected output.

data.reshape(11,5,3)

What this does is create 11 samples with 5 sub-sequences in each sample, each sub-sequence of length 3 (to Keras: 5 timesteps with 3 features each). If you look at the individual sub-sequences in your training data, you only ever have sequences like the ones below:
[1,2,3]
[20,21,22]
[401,402,403] and so on.

You never encounter a sequence in your training data like the ones below:
[21,22,30]
[1022,1030,1031] and so on.

This happens because you are splitting each length-15 sequence into 5 subsequences of length 3, which do not represent the whole original length-15 sequence. The short length-3 subsequences miss the seasonality present in the longer sequence.

You need to use the sliding window approach mentioned by @Jean (Time series forecasting | TensorFlow Core) to preprocess your sequence and generate the training data; see the illustration below. Hope this helps!
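To make the difference concrete, a small illustration (assuming NumPy >= 1.20 for sliding_window_view):

import numpy as np

row = np.array([1,2,3,10,11,12,20,21,22,30,31,32,40,41,42])

# reshape gives fixed, non-overlapping chunks: the transition 22 -> 30
# never appears inside any single training sequence
print(row.reshape(5, 3))

# sliding windows of length 3 (stride 1) cover every transition,
# including [21 22 30]
print(np.lib.stride_tricks.sliding_window_view(row, 3))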


Thanks @Jean and @aditya1601, I will try the windowing technique. Regarding aditya's comment: the model doesn't even learn to add 1 to every number as supplied by the current split. I get that the rest of the sequence is 'disconnected', but the commonality of every set of 3 consecutive numbers is not being picked up by the model; instead, for input 51, 52 it gives something like prediction = 4.23.


Hi @GeorgeMR. There are many factors due to which the model may not be learning. For starters, the window size is too short; try experimenting with a window size of at least 5. Try a simpler model and overfit it first (a sketch of that idea follows). Play with different optimizers and learning rates. All this hyperparameter tuning will definitely help in finding the solution. Please share with us what worked in the end!
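As a starting point for the "simpler model, bigger window" idea, something like the sketch below could work (the layer size and learning rate are guesses, and xs/ys are assumed to come from a sliding-window split with window size 5):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    LSTM(32, input_shape=(5, 1)),   # 5 timesteps, 1 feature per step
    Dense(1),
])
model.compile(loss='mae', optimizer=Adam(learning_rate=1e-3))
# reshape windowed inputs to (samples, timesteps, features) and try to overfit first:
# model.fit(xs.reshape(-1, 5, 1), ys, epochs=200)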


Time series are hard to process compared to other kinds of datasets. When converting the series into windows, keep the windows the same size. If you are using tf.data.Dataset, set drop_remainder=True to drop any window that does not reach the size you specified; a sketch follows.
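A sketch of that pattern, assuming a window of 5 inputs plus 1 label (tf.range here is just a stand-in for the real series):

import tensorflow as tf

series = tf.range(100, dtype=tf.float32)   # stand-in for the real series
window_size = 5

ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)  # keep equal-sized windows only
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.map(lambda w: (w[:-1], w[-1]))   # first 5 values -> input, last one -> label
ds = ds.shuffle(100).batch(32).prefetch(1)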

In addition to @aditya1601's ideas on improving the model, you can try scheduling the learning rate.

I ran the same experiment and tweaked different hyperparameters, but what accounted for the improvement in the metric (mean absolute error) was using a learning-rate scheduler and keeping the model simple. It might not be the same in your case, but this is what I was able to capture.
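For reference, a minimal sketch of a learning-rate sweep with tf.keras.callbacks.LearningRateScheduler (the ramp function is one common choice, not something specific to this problem):

import tensorflow as tf

# increase the learning rate a little every epoch, then plot loss vs. LR
# afterwards and pick a value just before the loss starts to blow up
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-4 * 10 ** (epoch / 20))
# history = model.fit(ds, epochs=100, callbacks=[lr_schedule])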

If you find it hard to write a series processing function with tf.data.Dataset, you can refer to this or this (CC:@Laurence_Moroney).
