Got NaN from model prediction

import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

Dataset = pd.read_csv('train.csv')
Dataset_dropped = Dataset.drop(["PassengerId", "Name", "Ticket"], axis=1)
Dataset_one_hot = pd.get_dummies(Dataset_dropped)

X = Dataset_one_hot.drop(["Survived"], axis=1)
y = Dataset_one_hot["Survived"]
X_train, X_test, y_train, y_test = train_test_split(X,
                                                    y,
                                                    test_size=0.2,
                                                    random_state=42)

tf.random.set_seed(42)
model_10 = tf.keras.Sequential([
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1)
])

model_10.compile(loss=tf.keras.losses.BinaryCrossentropy(),
                 optimizer=tf.keras.optimizers.Adam(),
                 metrics=['accuracy'])

history_10 = model_10.fit(X_train, y_train, epochs=10)

model_10.predict(X_test)

And then the predictions show NaN. Is there a problem with this code?

You see NaN values from loss and predict because your Dataset contains missing values.
Therefore you may want to drop the missing values, or use imputation techniques to replace them, before calling model.fit.
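As a quick diagnostic, you can count the missing values per column before training; any non-zero count can propagate NaN through the loss and predictions. A minimal sketch, using a small synthetic frame in place of train.csv (column names assumed from Kaggle's Titanic dataset):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for the real train.csv, just to illustrate the check.
Dataset = pd.DataFrame({
    "Age": [22.0, np.nan, 26.0],
    "Cabin": [np.nan, "C85", np.nan],
    "Embarked": ["S", "C", np.nan],
})

# Count NaN entries per column -- run this on your real Dataset.
missing = Dataset.isna().sum()
print(missing)
```

On the real Titanic data you would see non-zero counts for Age, Cabin, and Embarked.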

You can try adding Dataset.dropna(inplace=True) right after reading train.csv:

Dataset=pd.read_csv('/content/train.csv')
Dataset.dropna(inplace=True)
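If you would rather keep all rows, a common alternative is to impute instead of drop: fill numeric columns with the median and categorical columns with the most frequent value. A minimal sketch, again using a synthetic frame in place of train.csv (the Age, Fare, and Embarked column names are assumed from the Titanic dataset):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for train.csv, purely for illustration.
Dataset = pd.DataFrame({
    "Age": [22.0, np.nan, 26.0, 35.0],
    "Fare": [7.25, 71.28, 7.92, np.nan],
    "Embarked": ["S", "C", np.nan, "S"],
})

# Numeric columns: replace NaN with the column median.
Dataset["Age"] = Dataset["Age"].fillna(Dataset["Age"].median())
Dataset["Fare"] = Dataset["Fare"].fillna(Dataset["Fare"].median())

# Categorical columns: replace NaN with the most frequent value.
Dataset["Embarked"] = Dataset["Embarked"].fillna(Dataset["Embarked"].mode()[0])
```

Compared with dropna, this keeps the full 891 training rows, which matters on a dataset this small.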

Please take a look at this Colab gist that illustrates my findings. Thanks!
(I assumed you are trying Kaggle's Titanic dataset :sunglasses:)
