Why is the accuracy of my model fixed during training?

We created our own dataset rather than using a public one.

The model performs binary classification, and during training a problem arose: the accuracy stayed fixed.

Since the loss value changes significantly, learning appears to be progressing — so why is the accuracy fixed?

The current model is a simple MLP (sigmoid activation on the final layer, the Adam optimizer, and binary_crossentropy loss). We are planning to add a feature-extraction layer (e.g. Conv) next.
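For reference, this is the kind of feature-extraction front end I have in mind — a rough sketch that treats the 7 parameters as a length-7 sequence with one channel (filter counts and kernel sizes are just placeholder values, not tuned):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPool1D, Flatten, Dense

# Sketch only: a Conv1D feature extractor in front of the dense head.
# Input shape (7, 1) assumes the 7 feature columns reshaped to one channel.
model = Sequential([
    tf.keras.Input(shape=(7, 1)),
    Conv1D(32, kernel_size=3, activation='relu'),  # local patterns over columns
    MaxPool1D(pool_size=2),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),                # binary output
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```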

I am uploading the log as well.

best regards

Hi @William0127.
Also notice the huge jumps in the loss values. This suggests something is not quite right in your model, training loop, or hyperparameters.
Could you tell us more about your data, share your code, etc., so that people are better able to help you?
Thank you.

Thank you for the reply.

First, about the dataset: it is self-created, as shown in the image below.

It consists of label information and 7 parameters. To explain the data in more detail, columns 1, 2, and 3 iterate like nested for loops:

for _ in columns1:
    for _ in columns2:
        for _ in columns3:

The values in columns 4 to 7 are measurements taken for the operations in columns 1 to 3.
The dataset has approximately 550,000 rows, so I believe the amount of data is not an issue.
I am aware that there may be problems with the data itself (for example, an incorrect dataset structure).
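One quick check I can run on the labels (a sketch with stand-in values, not our real CSV): if the classes are imbalanced, accuracy can sit at the majority-class ratio even while the loss moves.

```python
import pandas as pd

# Stand-in labels, not the real dataset; only the check itself matters.
df = pd.DataFrame({"label": [0, 0, 0, 1]})

# Share of each class; a "fixed" accuracy often equals the largest share here.
counts = df["label"].value_counts(normalize=True)
print(counts)
```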

Here is the code I am using:

import tensorflow as tf
import numpy as np
import pandas as pd

from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import Dense, Flatten, Conv1D, MaxPool1D
from tensorflow.keras.losses import binary_crossentropy

df = pd.read_csv('./…/…/…/…/col_rx_dataset_rev1.csv')

df_columns = df.columns.unique()

train_data = df[df_columns[1:]]
label_data = df[df_columns[0]]


x_train, x_test, y_train, y_test = train_test_split(train_data, label_data, test_size=0.3, random_state=42)

x_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.4, random_state=42)

model = Sequential()
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# The compile step was missing from the snippet I pasted; these are the
# settings described above.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(x_train, y_train, validation_data=(x_valid, y_valid), epochs=100, batch_size=64, verbose=1)

I plan to add a feature-extraction layer after checking how much the MLP alone can learn. However, with this simple MLP, there was no change in accuracy.

Since I cannot upload the photo, I am pasting the CSV header instead:

|label |Columns1|Columns2|Columns3|Columns4|Columns5|Columns6|Columns7|

Hi @William0127 ,

There are a few reasons why your model's accuracy may stay fixed even though the loss value is changing significantly.

One possibility is that the model is overfitting to the training data. This means that the model is learning the training data too well, and is not able to generalize to new data.

Another possibility is that the model is underfitting the training data. This means that the model is not learning the training data well enough, and is not able to make accurate predictions on new data.

Finally, it is also possible that the model is simply not complex enough to learn the relationship between the input features and the output labels.
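Given the large loss jumps you mentioned, it is also worth checking that the input features are scaled; columns with very different ranges can make optimization unstable. A minimal sketch with made-up data (your real columns and ranges are assumptions here):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Made-up data standing in for the 7 feature columns.
rng = np.random.default_rng(42)
x_train = rng.uniform(0, 1000, size=(100, 7))
x_test = rng.uniform(0, 1000, size=(30, 7))

scaler = StandardScaler()
x_train_s = scaler.fit_transform(x_train)  # fit on training data only
x_test_s = scaler.transform(x_test)        # reuse the training statistics

print(x_train_s.mean(axis=0).round(2))     # roughly 0 per column after scaling
```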

To improve the accuracy of your model:

1. Make sure that your training data is correctly labeled.
2. Use a large training dataset. The more data you have, the better the model can learn the relationship between the input features and the output labels.
3. Use regularization techniques (e.g. dropout or L2 weight decay).
4. Try increasing the number of layers in your model.
5. Try a different optimizer.
6. Try a different loss function.
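As a sketch of points 3 and 4, here is the same kind of MLP with L2 weight penalties and Dropout added. The layer sizes mirror your model; the regularization strengths and dropout rate are just example values to tune:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

# Example values only: l2(1e-4) and Dropout(0.3) are starting points, not
# tuned settings. Input shape (7,) assumes your 7 feature columns.
model = Sequential([
    tf.keras.Input(shape=(7,)),
    Dense(256, activation='relu', kernel_regularizer=l2(1e-4)),
    Dropout(0.3),
    Dense(128, activation='relu', kernel_regularizer=l2(1e-4)),
    Dropout(0.3),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```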

I hope this helps!