I am working on human activity recognition (HAR) from smart-device sensor data using deep learning. However, I am confused about how to report the results of my deep learning architecture, so I would like to ask for guidance. Let me describe my setup step by step:
1- Dataset description:
The data comprise time-series sensor readings, and the dataset is imbalanced. It contains 12 classes, and the task is to predict human physical activities.
2- DataSet Distribution
Suppose we have data from ten people. The distribution across the data sets is as follows:
- 6 for training
- 2 for validation
- 2 for testing
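The subject-wise split above can be sketched as follows. This is a minimal illustration with hypothetical subject IDs and a hypothetical `split_windows` helper; the point is that each person's data lands entirely in one partition, so no subject leaks across sets.

```python
# Hypothetical subject IDs 1-10, assigned subject-wise to the three partitions.
subjects = list(range(1, 11))
train_subjects = subjects[:6]   # 6 for training
val_subjects = subjects[6:8]    # 2 for validation
test_subjects = subjects[8:]    # 2 for testing

def split_windows(windows):
    """Route each (subject_id, window) pair to its partition,
    so all of a person's windows stay in one set."""
    train, val, test = [], [], []
    for subject_id, window in windows:
        if subject_id in train_subjects:
            train.append(window)
        elif subject_id in val_subjects:
            val.append(window)
        else:
            test.append(window)
    return train, val, test
```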
3- To my knowledge, we can monitor model performance and act on it in two ways in human activity recognition. These two methods are as follows:
Early Stopping Training:
Stop training when a validation metric has stopped improving. I am using TensorFlow Keras. Here is the code for early stopping training:
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=10, restore_best_weights=True)
I am using 300 epochs, and training typically ends after 20 to 30.
After training terminates, I feed the test data to the model and obtain the result. As the code line shows, restore_best_weights=True restores the weights from the best epoch (as measured on the validation data), and those weights are then used to evaluate the test set. I report only the test-set results.
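This setup can be sketched end to end as follows. The shapes, layer choices, and random data here are placeholders (windows of 128 timesteps by 6 sensor channels, 12 classes), and the epoch count is shortened from 300 for the sketch; only the EarlyStopping configuration mirrors the code line above.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 128-timestep windows, 6 sensor channels, 12 activity classes.
num_classes = 12
x_train = np.random.randn(64, 128, 6).astype("float32")
y_train = np.random.randint(0, num_classes, 64)
x_val = np.random.randn(16, 128, 6).astype("float32")
y_val = np.random.randint(0, num_classes, 16)

# Placeholder architecture; any HAR model would slot in here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 6)),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop when val_loss stops improving; restore the best epoch's weights.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", mode="min", patience=10, restore_best_weights=True)

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50,  # shortened from 300 for this sketch
                    verbose=0,
                    callbacks=[early_stopping])
```

After `fit` returns, the model holds the best-epoch weights, so `model.evaluate(x_test, y_test)` gives the test-set numbers to report.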
Model Checkpoint:
ModelCheckpoint is a callback that saves the model weights during training, so the model or weights can be loaded later, either to resume training from the saved state or to evaluate on the test data.
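A minimal sketch of that callback, with a placeholder file path, might look like this: it saves the weights only when the validation loss improves, so the best-epoch model can be reloaded afterwards.

```python
import tensorflow as tf

# Save weights whenever val_loss improves; "best.weights.h5" is a placeholder path.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="best.weights.h5",
    monitor="val_loss",
    mode="min",
    save_best_only=True,
    save_weights_only=True,
)
# Pass it to model.fit(..., callbacks=[checkpoint]); later,
# model.load_weights("best.weights.h5") restores the best epoch for testing.
```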
Besides that, there is one other popular way to report results in deep learning:
Feed your test data to the model after each epoch during training to measure performance. Then, after training is complete, report the best test-set result across epochs.
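The per-epoch test evaluation described above could be implemented with a custom Keras callback along these lines. The class name and attributes are hypothetical; it simply evaluates the held-out test set after every epoch and records the accuracy so the best epoch can be picked afterwards.

```python
import tensorflow as tf

class TestSetEvaluator(tf.keras.callbacks.Callback):
    """Hypothetical callback: evaluate the test set after every epoch
    and keep the per-epoch accuracies for later inspection."""

    def __init__(self, x_test, y_test):
        super().__init__()
        self.x_test = x_test
        self.y_test = y_test
        self.test_accuracies = []

    def on_epoch_end(self, epoch, logs=None):
        loss, acc = self.model.evaluate(self.x_test, self.y_test, verbose=0)
        self.test_accuracies.append(acc)
```

Used as `model.fit(..., callbacks=[TestSetEvaluator(x_test, y_test)])`, after which `max(evaluator.test_accuracies)` is the best test-epoch result this method reports.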
I am asking for advice because I currently use only the early stopping mechanism, and I am now looking into model checkpointing. However, the third method is quite popular and seems likely to produce better performance numbers than either early stopping or model checkpointing. So, could you please advise me on which technique I should use and report?