Loss becomes NaN after some epochs

Hi everyone!
I’m trying to do binary classification on a very imbalanced dataset.
The model trains well, but after some random epoch the loss becomes NaN, and precision, recall, TP, and FP all drop to zero.
Sometimes it happens after the 3rd epoch, sometimes after the 20th.

The code:

import numpy as np
import os

from keras import regularizers
from tensorflow.keras import layers
from tensorflow.keras.utils import Sequence
from tensorflow.keras.models import Sequential
import pandas


nodules_csv = pandas.read_csv("/cropped_nodules.csv")

base_dir = "/cropped_nodules/"
all_image_paths = os.listdir(base_dir)
all_image_paths = sorted(all_image_paths,key=lambda x: int(os.path.splitext(x)[0]))
nodules = nodules_csv.rename(columns = {'SN':'ID'})
labels= nodules.iloc[:,1]
labels = labels.to_numpy()

class DataGenerator(Sequence):
# Learned from https://mahmoudyusof.github.io/facial-keypoint-detection/data-generator/
  def __init__(self, all_image_paths, labels, base_dir, output_size, shuffle=False, batch_size=10):
    """
    Initializes a data generator object
      :param csv_file: file in which image names and numeric labels are stored
      :param base_dir: the directory in which all images are stored
      :param output_size: image output size after preprocessing
      :param shuffle: shuffle the data after each epoch
      :param batch_size: The size of each batch returned by __getitem__
    """
    self.imgs = all_image_paths
    self.base_dir = base_dir
    self.output_size = output_size
    self.shuffle = shuffle
    self.batch_size = batch_size
    self.labels = labels
    self.on_epoch_end()

  def on_epoch_end(self):
    self.indices = np.arange(len(self.imgs))
    if self.shuffle:
      np.random.shuffle(self.indices)

  def __len__(self):
    return len(self.imgs) // self.batch_size

  def __getitem__(self, idx):
    ## Initializing the batch
    #  the trailing 1 in the shape is for single-channel images;
    #  for colored images you might want to set it to 3
    X = np.empty((self.batch_size, *self.output_size, 1))
    y = np.empty((self.batch_size, 1))

    # get the indices of the requested batch
    indices = self.indices[idx*self.batch_size:(idx+1)*self.batch_size]

    for i, data_index in enumerate(indices):
      img_path = os.path.join(self.base_dir, self.imgs[data_index])
      img = np.load(img_path)
      # add a channel axis: (31, 31, 31) -> (31, 31, 31, 1)
      if img.shape == (31, 31, 31):
          img = np.expand_dims(img, axis=3)
      ## this is where you preprocess the image
      ## make sure to resize it to self.output_size
      label = self.labels[data_index]
      ## if you have any preprocessing for
      ## the labels too, do it here

      X[i,] = img
      y[i] = label
    return X, y
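As an aside (not in the original post), a quick sanity check before training is to pull one batch from a generator like this and verify its shape and that it contains no NaNs. A minimal sketch, using random data in place of the real `X, y = train_gen[0]`:

```python
import numpy as np

# Hypothetical stand-in for one generator batch; in the real code this
# would be `X, y = train_gen[0]`.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 31, 31, 31, 1)).astype("float32")
y = rng.integers(0, 2, size=(10, 1)).astype("float32")

# Checks worth running on every batch before training:
assert X.shape[1:] == (31, 31, 31, 1), "unexpected image shape"
assert not np.isnan(X).any(), "NaNs in inputs"
assert not np.isnan(y).any(), "NaNs in labels"
assert set(np.unique(y)) <= {0.0, 1.0}, "labels must be binary"
print("batch OK:", X.shape, y.shape)
```

If a batch fails the NaN check, the garbage values reach the loss directly and can poison training long before the logs show it.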


## Defining and training the model

model = Sequential([
  ## define the model's architecture

    layers.Conv3D(filters=32, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.Conv3D(filters=32, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.MaxPool3D(pool_size=2),
    layers.BatchNormalization(),

    layers.Conv3D(filters=64, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.Conv3D(filters=64, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.MaxPool3D(pool_size=2),
    layers.BatchNormalization(),

    layers.Conv3D(filters=128, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.Conv3D(filters=128, kernel_size=3, activation="relu",padding='same'),
    layers.BatchNormalization(),
    layers.MaxPool3D(pool_size=2),
    layers.BatchNormalization(),

    layers.Conv3D(filters=256, kernel_size=3, activation="relu", padding='same'),
    layers.BatchNormalization(),
    layers.Conv3D(filters=256, kernel_size=3, activation="relu", padding='same'),
    layers.BatchNormalization(),
    layers.MaxPool3D(pool_size=2),
    layers.BatchNormalization(),

    layers.GlobalAveragePooling3D(),
    layers.Dense(units=512, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.3),
    layers.Dense(units=1, activation="sigmoid"),
])


train_gen = DataGenerator(all_image_paths, labels, base_dir, (31, 31, 31), batch_size=128, shuffle=False)

## compile the model first of course

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy', 'Precision', 'Recall', 'FalseNegatives', 'FalsePositives', 'TrueNegatives', 'TruePositives'])
model.build(input_shape= (128,31,31,31,1))
model.summary()
# now let's train the model

history = model.fit(train_gen, epochs=25)

and the results below:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv3d (Conv3D)             (128, 31, 31, 31, 32)     896       
                                                                 
 batch_normalization (BatchN  (128, 31, 31, 31, 32)    128       
 ormalization)                                                   
                                                                 
 conv3d_1 (Conv3D)           (128, 31, 31, 31, 32)     27680     
                                                                 
 batch_normalization_1 (Batc  (128, 31, 31, 31, 32)    128       
 hNormalization)                                                 
                                                                 
 max_pooling3d (MaxPooling3D  (128, 15, 15, 15, 32)    0         
 )                                                               
                                                                 
 batch_normalization_2 (Batc  (128, 15, 15, 15, 32)    128       
 hNormalization)                                                 
                                                                 
 conv3d_2 (Conv3D)           (128, 15, 15, 15, 64)     55360     
                                                                 
 batch_normalization_3 (Batc  (128, 15, 15, 15, 64)    256       
 hNormalization)                                                 
                                                                 
 conv3d_3 (Conv3D)           (128, 15, 15, 15, 64)     110656    
                                                                 
 batch_normalization_4 (Batc  (128, 15, 15, 15, 64)    256       
 hNormalization)                                                 
                                                                 
 max_pooling3d_1 (MaxPooling  (128, 7, 7, 7, 64)       0         
 3D)                                                             
                                                                 
 batch_normalization_5 (Batc  (128, 7, 7, 7, 64)       256       
 hNormalization)                                                 
                                                                 
 conv3d_4 (Conv3D)           (128, 7, 7, 7, 128)       221312    
                                                                 
 batch_normalization_6 (Batc  (128, 7, 7, 7, 128)      512       
 hNormalization)                                                 
                                                                 
 conv3d_5 (Conv3D)           (128, 7, 7, 7, 128)       442496    
                                                                 
 batch_normalization_7 (Batc  (128, 7, 7, 7, 128)      512       
 hNormalization)                                                 
                                                                 
 max_pooling3d_2 (MaxPooling  (128, 3, 3, 3, 128)      0         
 3D)                                                             
                                                                 
 batch_normalization_8 (Batc  (128, 3, 3, 3, 128)      512       
 hNormalization)                                                 
                                                                 
 conv3d_6 (Conv3D)           (128, 3, 3, 3, 256)       884992    
                                                                 
 batch_normalization_9 (Batc  (128, 3, 3, 3, 256)      1024      
 hNormalization)                                                 
                                                                 
 conv3d_7 (Conv3D)           (128, 3, 3, 3, 256)       1769728   
                                                                 
 batch_normalization_10 (Bat  (128, 3, 3, 3, 256)      1024      
 chNormalization)                                                
                                                                 
 max_pooling3d_3 (MaxPooling  (128, 1, 1, 1, 256)      0         
 3D)                                                             
                                                                 
 batch_normalization_11 (Bat  (128, 1, 1, 1, 256)      1024      
 chNormalization)                                                
                                                                 
 global_average_pooling3d (G  (128, 256)               0         
 lobalAveragePooling3D)                                          
                                                                 
 dense (Dense)               (128, 512)                131584    
                                                                 
 batch_normalization_12 (Bat  (128, 512)               2048      
 chNormalization)                                                
                                                                 
 dropout (Dropout)           (128, 512)                0         
                                                                 
 dense_1 (Dense)             (128, 1)                  513       
                                                                 
=================================================================
Total params: 3,653,025
Trainable params: 3,649,121
Non-trainable params: 3,904
_________________________________________________________________
Epoch 1/25
2022-12-15 17:46:04.897341: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8401
2022-12-15 17:46:05.829836: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
2022-12-15 17:46:06.464508: I tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:630] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2022-12-15 17:46:07.214021: I tensorflow/compiler/xla/service/service.cc:173] XLA service 0x2319ed30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-12-15 17:46:07.214054: I tensorflow/compiler/xla/service/service.cc:181]   StreamExecutor device (0): NVIDIA GeForce RTX 3080, Compute Capability 8.6
2022-12-15 17:46:07.217900: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2022-12-15 17:46:07.277629: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
2022-12-15 17:46:07.317843: I tensorflow/compiler/jit/xla_compilation_cache.cc:477] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
5898/5898 [==============================] - 1184s 199ms/step - loss: 0.0203 - accuracy: 0.9956 - precision: 0.0807 - recall: 0.1113 - false_negatives: 1381.0000 - false_positives: 1972.0000 - true_negatives: 751418.0000 - true_positives: 173.0000
Epoch 2/25
5898/5898 [==============================] - 1178s 200ms/step - loss: 0.0068 - accuracy: 0.9984 - precision: 0.6869 - recall: 0.3779 - false_negatives: 968.0000 - false_positives: 268.0000 - true_negatives: 753120.0000 - true_positives: 588.0000
Epoch 3/25
5898/5898 [==============================] - 1178s 200ms/step - loss: 0.0052 - accuracy: 0.9986 - precision: 0.7472 - recall: 0.4782 - false_negatives: 813.0000 - false_positives: 252.0000 - true_negatives: 753134.0000 - true_positives: 745.0000
Epoch 4/25
5898/5898 [==============================] - 1181s 200ms/step - loss: 0.0045 - accuracy: 0.9987 - precision: 0.7676 - recall: 0.5540 - false_negatives: 694.0000 - false_positives: 261.0000 - true_negatives: 753127.0000 - true_positives: 862.0000
Epoch 5/25
5898/5898 [==============================] - 1181s 200ms/step - loss: 0.0039 - accuracy: 0.9988 - precision: 0.7913 - recall: 0.5963 - false_negatives: 629.0000 - false_positives: 245.0000 - true_negatives: 753141.0000 - true_positives: 929.0000
Epoch 6/25
5898/5898 [==============================] - 1178s 200ms/step - loss: 0.0033 - accuracy: 0.9990 - precision: 0.8080 - recall: 0.6465 - false_negatives: 550.0000 - false_positives: 239.0000 - true_negatives: 753149.0000 - true_positives: 1006.0000
Epoch 7/25
5898/5898 [==============================] - 1178s 200ms/step - loss: 0.0029 - accuracy: 0.9990 - precision: 0.8178 - recall: 0.6913 - false_negatives: 481.0000 - false_positives: 240.0000 - true_negatives: 753146.0000 - true_positives: 1077.0000
Epoch 8/25
5898/5898 [==============================] - 1181s 200ms/step - loss: 0.0024 - accuracy: 0.9992 - precision: 0.8452 - recall: 0.7530 - false_negatives: 385.0000 - false_positives: 215.0000 - true_negatives: 753170.0000 - true_positives: 1174.0000
Epoch 9/25
5898/5898 [==============================] - 1177s 200ms/step - loss: 0.0018 - accuracy: 0.9993 - precision: 0.8632 - recall: 0.8077 - false_negatives: 299.0000 - false_positives: 199.0000 - true_negatives: 753190.0000 - true_positives: 1256.0000
Epoch 10/25
5898/5898 [==============================] - 1180s 200ms/step - loss: 0.0014 - accuracy: 0.9995 - precision: 0.9055 - recall: 0.8508 - false_negatives: 232.0000 - false_positives: 138.0000 - true_negatives: 753251.0000 - true_positives: 1323.0000
Epoch 11/25
5898/5898 [==============================] - 1181s 200ms/step - loss: 0.0014 - accuracy: 0.9995 - precision: 0.9086 - recall: 0.8678 - false_negatives: 206.0000 - false_positives: 136.0000 - true_negatives: 753250.0000 - true_positives: 1352.0000
Epoch 12/25
5898/5898 [==============================] - 1178s 200ms/step - loss: 0.0011 - accuracy: 0.9996 - precision: 0.9207 - recall: 0.8952 - false_negatives: 163.0000 - false_positives: 120.0000 - true_negatives: 753268.0000 - true_positives: 1393.0000
Epoch 13/25
5898/5898 [==============================] - 1182s 200ms/step - loss: 8.5650e-04 - accuracy: 0.9997 - precision: 0.9382 - recall: 0.9177 - false_negatives: 128.0000 - false_positives: 94.0000 - true_negatives: 753294.0000 - true_positives: 1428.0000
Epoch 14/25
5898/5898 [==============================] - 1179s 200ms/step - loss: 7.9298e-04 - accuracy: 0.9998 - precision: 0.9509 - recall: 0.9326 - false_negatives: 105.0000 - false_positives: 75.0000 - true_negatives: 753312.0000 - true_positives: 1452.0000
Epoch 15/25
5898/5898 [==============================] - 1179s 200ms/step - loss: 7.1897e-04 - accuracy: 0.9998 - precision: 0.9576 - recall: 0.9422 - false_negatives: 90.0000 - false_positives: 65.0000 - true_negatives: 753322.0000 - true_positives: 1467.0000
Epoch 16/25
5898/5898 [==============================] - 1181s 200ms/step - loss: 6.0985e-04 - accuracy: 0.9998 - precision: 0.9567 - recall: 0.9499 - false_negatives: 78.0000 - false_positives: 67.0000 - true_negatives: 753320.0000 - true_positives: 1479.0000
Epoch 17/25
5898/5898 [==============================] - 1182s 200ms/step - loss: 6.1805e-04 - accuracy: 0.9998 - precision: 0.9648 - recall: 0.9499 - false_negatives: 78.0000 - false_positives: 54.0000 - true_negatives: 753332.0000 - true_positives: 1480.0000
Epoch 18/25
5898/5898 [==============================] - 1182s 200ms/step - loss: 4.7617e-04 - accuracy: 0.9998 - precision: 0.9657 - recall: 0.9595 - false_negatives: 63.0000 - false_positives: 53.0000 - true_negatives: 753336.0000 - true_positives: 1492.0000
Epoch 19/25
5898/5898 [==============================] - 1196s 203ms/step - loss: 5.4637e-04 - accuracy: 0.9998 - precision: 0.9637 - recall: 0.9563 - false_negatives: 68.0000 - false_positives: 56.0000 - true_negatives: 753332.0000 - true_positives: 1488.0000
Epoch 20/25
5898/5898 [==============================] - 1748s 296ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1557.0000 - false_positives: 0.0000e+00 - true_negatives: 753387.0000 - true_positives: 0.0000e+00
Epoch 21/25
5898/5898 [==============================] - 1150s 195ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1557.0000 - false_positives: 0.0000e+00 - true_negatives: 753387.0000 - true_positives: 0.0000e+00
Epoch 22/25
5898/5898 [==============================] - 1145s 194ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1558.0000 - false_positives: 0.0000e+00 - true_negatives: 753386.0000 - true_positives: 0.0000e+00
Epoch 23/25
5898/5898 [==============================] - 1145s 194ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1555.0000 - false_positives: 0.0000e+00 - true_negatives: 753389.0000 - true_positives: 0.0000e+00
Epoch 24/25
5898/5898 [==============================] - 1146s 194ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1558.0000 - false_positives: 0.0000e+00 - true_negatives: 753386.0000 - true_positives: 0.0000e+00
Epoch 25/25
5898/5898 [==============================] - 1148s 195ms/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1557.0000 - false_positives: 0.0000e+00 - true_negatives: 753387.0000 - true_positives: 0.0000e+00

I have changed most of the code and still get NaN after a few epochs!
I added a validation set (randomly taking 10% of each of the normal and abnormal data), added learning-rate decay, and tried many solutions from StackOverflow, but no luck!

@Mustafa_Mahmood,

To avoid NaN loss, ensure:

  1. Your training data is properly scaled and doesn’t contain NaNs.
  2. If you are facing the exploding-gradient problem, use gradient clipping.
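A minimal sketch of both suggestions (the arrays here are made up for illustration; in the real pipeline `X` would hold the loaded nodule volumes): drop NaN samples, min-max scale the inputs, and enable clipping through the optimizer:

```python
import numpy as np

# Hypothetical unscaled data with one corrupted (NaN) sample.
X = np.array([[0.0, 500.0], [np.nan, 2.0], [100.0, 300.0]], dtype="float32")
y = np.array([0.0, 1.0, 1.0], dtype="float32")

# 1. Drop samples that contain NaNs.
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

# 2. Scale inputs to a sane range (min-max scaling here).
X = (X - X.min()) / (X.max() - X.min())

# For exploding gradients, Keras optimizers accept clipping directly, e.g.:
#   model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0), ...)
print(X.shape, float(X.min()), float(X.max()))
```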

Thank you!


Thanks for your answer.
I’ve added a condition to exclude NaN data in the data generator and added gradient clipping with clipnorm=0.001, but the same thing happened.
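One more thing worth checking (an editorial aside, not from the original thread): binary cross-entropy itself produces NaN/inf when a prediction saturates at exactly 0 or 1, since log(0) appears; clipping predictions with a small epsilon (Keras applies a similar epsilon internally for probability inputs) keeps the loss finite. A NumPy sketch of that idea:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from 0 and 1 so log() never sees 0.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([1.0, 0.0, 0.9])  # fully saturated predictions

# Without clipping, log(0) would make this NaN; with clipping it stays finite.
loss = binary_crossentropy(y_true, y_pred)
print(loss)
```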
Here is the output:

Model: "3dcnn"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 31, 31, 31, 1)]   0         
                                                                 
 conv3d (Conv3D)             (None, 31, 31, 31, 64)    1792      
                                                                 
 max_pooling3d (MaxPooling3D  (None, 15, 15, 15, 64)   0         
 )                                                               
                                                                 
 batch_normalization (BatchN  (None, 15, 15, 15, 64)   256       
 ormalization)                                                   
                                                                 
 conv3d_1 (Conv3D)           (None, 15, 15, 15, 64)    110656    
                                                                 
 max_pooling3d_1 (MaxPooling  (None, 7, 7, 7, 64)      0         
 3D)                                                             
                                                                 
 batch_normalization_1 (Batc  (None, 7, 7, 7, 64)      256       
 hNormalization)                                                 
                                                                 
 conv3d_2 (Conv3D)           (None, 7, 7, 7, 128)      221312    
                                                                 
 max_pooling3d_2 (MaxPooling  (None, 3, 3, 3, 128)     0         
 3D)                                                             
                                                                 
 batch_normalization_2 (Batc  (None, 3, 3, 3, 128)     512       
 hNormalization)                                                 
                                                                 
 conv3d_3 (Conv3D)           (None, 3, 3, 3, 256)      884992    
                                                                 
 max_pooling3d_3 (MaxPooling  (None, 1, 1, 1, 256)     0         
 3D)                                                             
                                                                 
 batch_normalization_3 (Batc  (None, 1, 1, 1, 256)     1024      
 hNormalization)                                                 
                                                                 
 global_average_pooling3d (G  (None, 256)              0         
 lobalAveragePooling3D)                                          
                                                                 
 dense (Dense)               (None, 512)               131584    
                                                                 
 dropout (Dropout)           (None, 512)               0         
                                                                 
 dense_1 (Dense)             (None, 1)                 513       
                                                                 
=================================================================
Total params: 1,352,897
Trainable params: 1,351,873
Non-trainable params: 1,024
_________________________________________________________________
Epoch 1/100
2023-02-10 03:48:31.339471: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8401
2023-02-10 03:48:32.663116: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
2023-02-10 03:48:33.046583: I tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:630] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2023-02-10 03:48:33.787152: I tensorflow/compiler/xla/service/service.cc:173] XLA service 0x1f638340 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-02-10 03:48:33.787179: I tensorflow/compiler/xla/service/service.cc:181]   StreamExecutor device (0): NVIDIA GeForce RTX 3080, Compute Capability 8.6
2023-02-10 03:48:33.814579: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2023-02-10 03:48:34.003245: I tensorflow/tsl/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
2023-02-10 03:48:34.041784: I tensorflow/compiler/jit/xla_compilation_cache.cc:477] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
5308/5308 [==============================] - 7881s 1s/step - loss: 0.0179 - accuracy: 0.9974 - precision: 0.2611 - recall: 0.1468 - false_negatives: 1197.0000 - false_positives: 583.0000 - true_negatives: 677438.0000 - true_positives: 206.0000 - val_loss: 0.0129 - val_accuracy: 0.9985 - val_precision: 0.9565 - val_recall: 0.2821 - val_false_negatives: 112.0000 - val_false_positives: 2.0000 - val_true_negatives: 75234.0000 - val_true_positives: 44.0000
Epoch 2/100
5308/5308 [==============================] - 8440s 2s/step - loss: 0.0109 - accuracy: 0.9986 - precision: 0.8600 - recall: 0.4024 - false_negatives: 839.0000 - false_positives: 92.0000 - true_negatives: 677928.0000 - true_positives: 565.0000 - val_loss: 0.0100 - val_accuracy: 0.9988 - val_precision: 0.9000 - val_recall: 0.4645 - val_false_negatives: 83.0000 - val_false_positives: 8.0000 - val_true_negatives: 75229.0000 - val_true_positives: 72.0000
Epoch 3/100
5308/5308 [==============================] - 8503s 2s/step - loss: 0.0092 - accuracy: 0.9989 - precision: 0.8698 - recall: 0.5243 - false_negatives: 667.0000 - false_positives: 110.0000 - true_negatives: 677912.0000 - true_positives: 735.0000 - val_loss: 0.0126 - val_accuracy: 0.9987 - val_precision: 0.9130 - val_recall: 0.4038 - val_false_negatives: 93.0000 - val_false_positives: 6.0000 - val_true_negatives: 75230.0000 - val_true_positives: 63.0000
Epoch 4/100
5308/5308 [==============================] - 7739s 1s/step - loss: 0.0076 - accuracy: 0.9990 - precision: 0.8734 - recall: 0.5960 - false_negatives: 566.0000 - false_positives: 121.0000 - true_negatives: 677902.0000 - true_positives: 835.0000 - val_loss: 0.0106 - val_accuracy: 0.9988 - val_precision: 0.8131 - val_recall: 0.5613 - val_false_negatives: 68.0000 - val_false_positives: 20.0000 - val_true_negatives: 75217.0000 - val_true_positives: 87.0000
Epoch 5/100
5308/5308 [==============================] - 7774s 1s/step - loss: 0.0064 - accuracy: 0.9991 - precision: 0.8922 - recall: 0.6372 - false_negatives: 509.0000 - false_positives: 108.0000 - true_negatives: 677913.0000 - true_positives: 894.0000 - val_loss: 0.0098 - val_accuracy: 0.9988 - val_precision: 0.7742 - val_recall: 0.6115 - val_false_negatives: 61.0000 - val_false_positives: 28.0000 - val_true_negatives: 75207.0000 - val_true_positives: 96.0000
Epoch 6/100
5308/5308 [==============================] - 7823s 1s/step - loss: 0.0050 - accuracy: 0.9993 - precision: 0.9175 - recall: 0.7140 - false_negatives: 401.0000 - false_positives: 90.0000 - true_negatives: 677932.0000 - true_positives: 1001.0000 - val_loss: 0.0096 - val_accuracy: 0.9987 - val_precision: 0.7424 - val_recall: 0.6164 - val_false_negatives: 61.0000 - val_false_positives: 34.0000 - val_true_negatives: 75199.0000 - val_true_positives: 98.0000
Epoch 7/100
5308/5308 [==============================] - 7775s 1s/step - loss: 0.0041 - accuracy: 0.9994 - precision: 0.9312 - recall: 0.7817 - false_negatives: 306.0000 - false_positives: 81.0000 - true_negatives: 677941.0000 - true_positives: 1096.0000 - val_loss: 0.0136 - val_accuracy: 0.9988 - val_precision: 0.8077 - val_recall: 0.5385 - val_false_negatives: 72.0000 - val_false_positives: 20.0000 - val_true_negatives: 75216.0000 - val_true_positives: 84.0000
Epoch 8/100
5308/5308 [==============================] - 7802s 1s/step - loss: 0.0029 - accuracy: 0.9995 - precision: 0.9398 - recall: 0.8333 - false_negatives: 234.0000 - false_positives: 75.0000 - true_negatives: 677945.0000 - true_positives: 1170.0000 - val_loss: 0.0137 - val_accuracy: 0.9988 - val_precision: 0.7944 - val_recall: 0.5449 - val_false_negatives: 71.0000 - val_false_positives: 22.0000 - val_true_negatives: 75214.0000 - val_true_positives: 85.0000
Epoch 9/100
5308/5308 [==============================] - 7806s 1s/step - loss: 0.0025 - accuracy: 0.9997 - precision: 0.9608 - recall: 0.8751 - false_negatives: 175.0000 - false_positives: 50.0000 - true_negatives: 677973.0000 - true_positives: 1226.0000 - val_loss: 0.0140 - val_accuracy: 0.9986 - val_precision: 0.6709 - val_recall: 0.6752 - val_false_negatives: 51.0000 - val_false_positives: 52.0000 - val_true_negatives: 75183.0000 - val_true_positives: 106.0000
Epoch 10/100
5308/5308 [==============================] - 8430s 2s/step - loss: 0.0022 - accuracy: 0.9997 - precision: 0.9577 - recall: 0.9024 - false_negatives: 137.0000 - false_positives: 56.0000 - true_negatives: 677964.0000 - true_positives: 1267.0000 - val_loss: 0.0160 - val_accuracy: 0.9987 - val_precision: 0.7500 - val_recall: 0.5769 - val_false_negatives: 66.0000 - val_false_positives: 30.0000 - val_true_negatives: 75206.0000 - val_true_positives: 90.0000
Epoch 11/100
5308/5308 [==============================] - 7898s 1s/step - loss: 0.0020 - accuracy: 0.9997 - precision: 0.9627 - recall: 0.9030 - false_negatives: 136.0000 - false_positives: 49.0000 - true_negatives: 677973.0000 - true_positives: 1266.0000 - val_loss: 0.0198 - val_accuracy: 0.9986 - val_precision: 0.7107 - val_recall: 0.5478 - val_false_negatives: 71.0000 - val_false_positives: 35.0000 - val_true_negatives: 75200.0000 - val_true_positives: 86.0000
Epoch 12/100
5308/5308 [==============================] - 7902s 1s/step - loss: 0.0015 - accuracy: 0.9998 - precision: 0.9637 - recall: 0.9286 - false_negatives: 100.0000 - false_positives: 49.0000 - true_negatives: 677975.0000 - true_positives: 1300.0000 - val_loss: 0.0203 - val_accuracy: 0.9987 - val_precision: 0.7395 - val_recall: 0.5641 - val_false_negatives: 68.0000 - val_false_positives: 31.0000 - val_true_negatives: 75205.0000 - val_true_positives: 88.0000
Epoch 13/100
5308/5308 [==============================] - 8554s 2s/step - loss: 0.0015 - accuracy: 0.9998 - precision: 0.9694 - recall: 0.9280 - false_negatives: 101.0000 - false_positives: 41.0000 - true_negatives: 677981.0000 - true_positives: 1301.0000 - val_loss: 0.0219 - val_accuracy: 0.9988 - val_precision: 0.7913 - val_recall: 0.5796 - val_false_negatives: 66.0000 - val_false_positives: 24.0000 - val_true_negatives: 75211.0000 - val_true_positives: 91.0000
Epoch 14/100
5308/5308 [==============================] - 8412s 2s/step - loss: 0.0017 - accuracy: 0.9998 - precision: 0.9669 - recall: 0.9345 - false_negatives: 92.0000 - false_positives: 45.0000 - true_negatives: 677974.0000 - true_positives: 1313.0000 - val_loss: 0.0225 - val_accuracy: 0.9987 - val_precision: 0.7692 - val_recall: 0.5128 - val_false_negatives: 76.0000 - val_false_positives: 24.0000 - val_true_negatives: 75212.0000 - val_true_positives: 80.0000
Epoch 15/100
5308/5308 [==============================] - 7952s 1s/step - loss: 0.0013 - accuracy: 0.9998 - precision: 0.9800 - recall: 0.9444 - false_negatives: 78.0000 - false_positives: 27.0000 - true_negatives: 677995.0000 - true_positives: 1324.0000 - val_loss: 0.0219 - val_accuracy: 0.9986 - val_precision: 0.7031 - val_recall: 0.5769 - val_false_negatives: 66.0000 - val_false_positives: 38.0000 - val_true_negatives: 75198.0000 - val_true_positives: 90.0000
Epoch 16/100
5308/5308 [==============================] - 7956s 1s/step - loss: 0.0011 - accuracy: 0.9998 - precision: 0.9758 - recall: 0.9486 - false_negatives: 72.0000 - false_positives: 33.0000 - true_negatives: 677991.0000 - true_positives: 1328.0000 - val_loss: 0.0243 - val_accuracy: 0.9986 - val_precision: 0.7411 - val_recall: 0.5321 - val_false_negatives: 73.0000 - val_false_positives: 29.0000 - val_true_negatives: 75207.0000 - val_true_positives: 83.0000
Epoch 17/100
5308/5308 [==============================] - 7907s 1s/step - loss: 0.0011 - accuracy: 0.9999 - precision: 0.9762 - recall: 0.9636 - false_negatives: 51.0000 - false_positives: 33.0000 - true_negatives: 677989.0000 - true_positives: 1351.0000 - val_loss: 0.0239 - val_accuracy: 0.9987 - val_precision: 0.7029 - val_recall: 0.6178 - val_false_negatives: 60.0000 - val_false_positives: 41.0000 - val_true_negatives: 75194.0000 - val_true_positives: 97.0000
Epoch 18/100
5308/5308 [==============================] - 7778s 1s/step - loss: 8.2318e-04 - accuracy: 0.9999 - precision: 0.9760 - recall: 0.9586 - false_negatives: 58.0000 - false_positives: 33.0000 - true_negatives: 677990.0000 - true_positives: 1343.0000 - val_loss: 0.0253 - val_accuracy: 0.9988 - val_precision: 0.7815 - val_recall: 0.5962 - val_false_negatives: 63.0000 - val_false_positives: 26.0000 - val_true_negatives: 75210.0000 - val_true_positives: 93.0000
Epoch 19/100
5308/5308 [==============================] - 7892s 1s/step - loss: 9.6254e-04 - accuracy: 0.9999 - precision: 0.9790 - recall: 0.9615 - false_negatives: 54.0000 - false_positives: 29.0000 - true_negatives: 677991.0000 - true_positives: 1350.0000 - val_loss: 0.0227 - val_accuracy: 0.9987 - val_precision: 0.7652 - val_recall: 0.5641 - val_false_negatives: 68.0000 - val_false_positives: 27.0000 - val_true_negatives: 75209.0000 - val_true_positives: 88.0000
Epoch 20/100
5308/5308 [==============================] - 8408s 2s/step - loss: 6.6576e-04 - accuracy: 0.9999 - precision: 0.9855 - recall: 0.9686 - false_negatives: 44.0000 - false_positives: 20.0000 - true_negatives: 678003.0000 - true_positives: 1357.0000 - val_loss: 0.0261 - val_accuracy: 0.9987 - val_precision: 0.7402 - val_recall: 0.6026 - val_false_negatives: 62.0000 - val_false_positives: 33.0000 - val_true_negatives: 75203.0000 - val_true_positives: 94.0000
Epoch 21/100
5308/5308 [==============================] - 8073s 2s/step - loss: 8.1772e-04 - accuracy: 0.9999 - precision: 0.9827 - recall: 0.9708 - false_negatives: 41.0000 - false_positives: 24.0000 - true_negatives: 677997.0000 - true_positives: 1362.0000 - val_loss: 0.0298 - val_accuracy: 0.9987 - val_precision: 0.7864 - val_recall: 0.5192 - val_false_negatives: 75.0000 - val_false_positives: 22.0000 - val_true_negatives: 75214.0000 - val_true_positives: 81.0000
Epoch 22/100
5308/5308 [==============================] - 7927s 1s/step - loss: 7.8629e-04 - accuracy: 0.9999 - precision: 0.9891 - recall: 0.9729 - false_negatives: 38.0000 - false_positives: 15.0000 - true_negatives: 678005.0000 - true_positives: 1366.0000 - val_loss: 0.0264 - val_accuracy: 0.9988 - val_precision: 0.7870 - val_recall: 0.5484 - val_false_negatives: 70.0000 - val_false_positives: 23.0000 - val_true_negatives: 75214.0000 - val_true_positives: 85.0000
Epoch 23/100
5308/5308 [==============================] - 8023s 2s/step - loss: 6.8606e-04 - accuracy: 0.9999 - precision: 0.9841 - recall: 0.9736 - false_negatives: 37.0000 - false_positives: 22.0000 - true_negatives: 678000.0000 - true_positives: 1365.0000 - val_loss: 0.0249 - val_accuracy: 0.9987 - val_precision: 0.7063 - val_recall: 0.6474 - val_false_negatives: 55.0000 - val_false_positives: 42.0000 - val_true_negatives: 75194.0000 - val_true_positives: 101.0000
Epoch 24/100
5308/5308 [==============================] - 8242s 2s/step - loss: 6.5897e-04 - accuracy: 0.9999 - precision: 0.9920 - recall: 0.9750 - false_negatives: 35.0000 - false_positives: 11.0000 - true_negatives: 678011.0000 - true_positives: 1367.0000 - val_loss: 0.0242 - val_accuracy: 0.9986 - val_precision: 0.6600 - val_recall: 0.6346 - val_false_negatives: 57.0000 - val_false_positives: 51.0000 - val_true_negatives: 75185.0000 - val_true_positives: 99.0000
Epoch 25/100
5308/5308 [==============================] - 8523s 2s/step - loss: 5.7305e-04 - accuracy: 0.9999 - precision: 0.9892 - recall: 0.9793 - false_negatives: 29.0000 - false_positives: 15.0000 - true_negatives: 678008.0000 - true_positives: 1372.0000 - val_loss: 0.0273 - val_accuracy: 0.9988 - val_precision: 0.8182 - val_recall: 0.5192 - val_false_negatives: 75.0000 - val_false_positives: 18.0000 - val_true_negatives: 75218.0000 - val_true_positives: 81.0000
Epoch 26/100
5308/5308 [==============================] - 8198s 2s/step - loss: 5.6548e-04 - accuracy: 0.9999 - precision: 0.9921 - recall: 0.9807 - false_negatives: 27.0000 - false_positives: 11.0000 - true_negatives: 678012.0000 - true_positives: 1374.0000 - val_loss: 0.0285 - val_accuracy: 0.9988 - val_precision: 0.7627 - val_recall: 0.5844 - val_false_negatives: 64.0000 - val_false_positives: 28.0000 - val_true_negatives: 75210.0000 - val_true_positives: 90.0000
Epoch 27/100
5308/5308 [==============================] - 7946s 1s/step - loss: 8.0337e-04 - accuracy: 0.9999 - precision: 0.9871 - recall: 0.9758 - false_negatives: 34.0000 - false_positives: 18.0000 - true_negatives: 678000.0000 - true_positives: 1372.0000 - val_loss: 0.0311 - val_accuracy: 0.9987 - val_precision: 0.7925 - val_recall: 0.5283 - val_false_negatives: 75.0000 - val_false_positives: 22.0000 - val_true_negatives: 75211.0000 - val_true_positives: 84.0000
Epoch 28/100
5308/5308 [==============================] - 7934s 1s/step - loss: 6.6470e-04 - accuracy: 0.9999 - precision: 0.9850 - recall: 0.9836 - false_negatives: 23.0000 - false_positives: 21.0000 - true_negatives: 678001.0000 - true_positives: 1379.0000 - val_loss: 0.0250 - val_accuracy: 0.9988 - val_precision: 0.7519 - val_recall: 0.6218 - val_false_negatives: 59.0000 - val_false_positives: 32.0000 - val_true_negatives: 75204.0000 - val_true_positives: 97.0000
Epoch 29/100
5308/5308 [==============================] - 7947s 1s/step - loss: 4.6816e-04 - accuracy: 0.9999 - precision: 0.9949 - recall: 0.9807 - false_negatives: 27.0000 - false_positives: 7.0000 - true_negatives: 678015.0000 - true_positives: 1375.0000 - val_loss: 0.0247 - val_accuracy: 0.9989 - val_precision: 0.7647 - val_recall: 0.6710 - val_false_negatives: 51.0000 - val_false_positives: 32.0000 - val_true_negatives: 75205.0000 - val_true_positives: 104.0000
Epoch 30/100
5308/5308 [==============================] - 7878s 1s/step - loss: nan - accuracy: 0.9979 - precision: 1.0000 - recall: 0.0014 - false_negatives: 1402.0000 - false_positives: 0.0000e+00 - true_negatives: 678020.0000 - true_positives: 2.0000 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 31/100
5308/5308 [==============================] - 8580s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1400.0000 - false_positives: 0.0000e+00 - true_negatives: 678024.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 32/100
5308/5308 [==============================] - 8046s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1405.0000 - false_positives: 0.0000e+00 - true_negatives: 678019.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 33/100
5308/5308 [==============================] - 8095s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1402.0000 - false_positives: 0.0000e+00 - true_negatives: 678022.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 157.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75235.0000 - val_true_positives: 0.0000e+00
Epoch 34/100
5308/5308 [==============================] - 7894s 1s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1401.0000 - false_positives: 0.0000e+00 - true_negatives: 678023.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 35/100
5308/5308 [==============================] - 8324s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1403.0000 - false_positives: 0.0000e+00 - true_negatives: 678021.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 36/100
5308/5308 [==============================] - 8429s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1403.0000 - false_positives: 0.0000e+00 - true_negatives: 678021.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 156.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75236.0000 - val_true_positives: 0.0000e+00
Epoch 37/100
5308/5308 [==============================] - 8460s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1407.0000 - false_positives: 0.0000e+00 - true_negatives: 678017.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 157.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75235.0000 - val_true_positives: 0.0000e+00
Epoch 38/100
5308/5308 [==============================] - 8028s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1401.0000 - false_positives: 0.0000e+00 - true_negatives: 678023.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 157.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75235.0000 - val_true_positives: 0.0000e+00
Epoch 39/100
5308/5308 [==============================] - 7868s 1s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1402.0000 - false_positives: 0.0000e+00 - true_negatives: 678022.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 157.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75235.0000 - val_true_positives: 0.0000e+00
Epoch 40/100
5308/5308 [==============================] - 8046s 2s/step - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1403.0000 - false_positives: 0.0000e+00 - true_negatives: 678021.0000 - true_positives: 0.0000e+00 - val_loss: nan - val_accuracy: 0.9979 - val_precision: 0.0000e+00 - val_recall: 0.0000e+00 - val_false_negatives: 155.0000 - val_false_positives: 0.0000e+00 - val_true_negatives: 75237.0000 - val_true_positives: 0.0000e+00
Epoch 41/100
5308/5308 [==============================] - ETA: 0s - loss: nan - accuracy: 0.9979 - precision: 0.0000e+00 - recall: 0.0000e+00 - false_negatives: 1402.0000 - false_positives: 0.0000e+00 - true_negatives: 678022.0000 - true_positives: 0.0000e+00
Process finished with exit code -1

It looks like you have been stuck on this issue for some time now.
In your original question you mentioned that your dataset is unbalanced, but you haven't said much about it since. A basic question first: have you tried anything to cope with this imbalance (e.g. the Classification on imbalanced data tutorial)?
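One common technique from that tutorial is class weighting, so the rare positive class contributes as much to the loss as the abundant negative class. A minimal sketch (the counts here are the ones you reported for your dataset; substitute your own if they change):

```python
import numpy as np

# Class counts reported for this dataset.
neg, pos = 753794, 1186
total = neg + pos

# Weighting scheme from the TensorFlow "Classification on imbalanced
# data" tutorial: scale each class inversely to its frequency so both
# classes contribute roughly equally to the total loss.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}

print(class_weight)
```

The resulting dict would then be passed to training, e.g. `model.fit(train_gen, class_weight=class_weight, ...)`.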


Hi, thanks for your help.

The data are 3D lung nodule images saved as .npy files:
total number of images = 754980
size of each image = (31, 31, 31)
number of images of class 1 = 1186
number of images of class 0 = 753794

I have checked the link you posted; I have already implemented some of the steps. I'll read it again more carefully and see if it helps.

Some of what I did to cope with the imbalanced data:

  1. I'm using binary cross-entropy loss
  2. splitting the data randomly into 90% training and 10% validation (the data generator puts 90% of class 0 and 90% of class 1 into the training set, and the remaining 10% of each class into the validation set)
  3. tracking the metrics TP, TN, FP, FN, precision, recall, and accuracy
  4. the data are normalized
  5. the batch size is the largest that RAM can handle (128)
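One extra check worth adding to that list: with hundreds of thousands of .npy volumes, a single corrupted file (NaN/Inf voxels, or a label outside {0, 1}) can poison the loss mid-training exactly the way you describe. A minimal sketch of a one-off sanity pass over the generator (the name `train_gen` is a placeholder for your DataGenerator instance):

```python
import numpy as np

def batch_is_finite(x, y):
    """Return True if a batch contains no NaN/Inf values and all
    labels are strictly 0 or 1 -- the usual culprits behind a loss
    that suddenly becomes nan after some random epoch."""
    return bool(np.isfinite(x).all()
                and np.isfinite(y).all()
                and set(np.unique(y)).issubset({0, 1}))

# Hypothetical usage: iterate once over the generator before
# training and flag any offending batch index.
# for i in range(len(train_gen)):
#     x, y = train_gen[i]
#     if not batch_is_finite(x, y):
#         print("bad batch:", i)

# Quick self-check on synthetic data.
good = batch_is_finite(np.ones((2, 4)), np.array([0, 1]))
bad = batch_is_finite(np.array([[np.nan]]), np.array([1]))
print(good, bad)
```

Independently of the data check, adding `tf.keras.callbacks.TerminateOnNaN()` to the callbacks list stops training at the first nan batch instead of burning hours of epochs on a dead run.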