LSTM for time series analysis

I am unable to retrieve the training loss from history.history['loss']. Please have a look at my code below.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(input_shape=(90, 1), activation='tanh', units=90, return_sequences=True))
model.add(Dropout(0.1))
model.add(LSTM(200))
model.add(Dense(1))  # tanh activation has been removed as you requested
model.compile(loss='mse', optimizer='adam')
history = model.fit(train_X, train_y, batch_size=128, epochs=5, validation_split=0.2, shuffle=False)
model.summary()

Epoch 1/5
55/55 [==============================] - 32s 540ms/step - loss: 0.7681 - val_loss: 1.0027
Epoch 2/5
55/55 [==============================] - 29s 528ms/step - loss: 0.7718 - val_loss: 1.0000
Epoch 3/5
55/55 [==============================] - 29s 535ms/step - loss: 0.7718 - val_loss: 0.9999
Epoch 4/5
55/55 [==============================] - 29s 532ms/step - loss: 0.7701 - val_loss: 1.0006
Epoch 5/5
55/55 [==============================] - 28s 506ms/step - loss: 0.7704 - val_loss: 0.9998

history.history['loss']

[0.9976969361305237,
0.9994587898254395,
0.9989796280860901,
0.9978650808334351,
0.9971943497657776]

Why are the values different from the ones printed during training? Please look into the issue.

Hi Yusuff,

Can you please share a notebook that reproduces the issue? (If the data can be shared, of course, or with any random data that shows the issue.)

I think that history.history['loss'] is actually holding the val_loss values from training: the numbers you printed match the val_loss column in the epoch log, not the loss column.
Try executing history.history and look for the 'loss' key.
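
As a quick check, something like this should work (a minimal sketch, assuming a standard Keras History object and that matplotlib is available in your environment):

import matplotlib.pyplot as plt

# The History object records one list entry per epoch for every tracked metric.
print(history.history.keys())        # e.g. dict_keys(['loss', 'val_loss'])
print(history.history['loss'])       # training loss per epoch
print(history.history['val_loss'])   # validation loss per epoch

# Plot both curves to see which one matches the values you got back.
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('epoch')
plt.ylabel('mse')
plt.legend()
plt.show()

If the list stored under 'loss' lines up with the val_loss numbers from your epoch log, that would confirm the mix-up you are describing.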