After going through the TensorFlow text classification with RNNs tutorial, I was wondering how I could get a sense of probability from a prediction output. For background: it is a binary sentiment classifier whose final layer is a Dense(1), trained with binary cross-entropy from logits.
Because the final layer outputs logits, these cannot be directly interpreted as probabilities. As the tutorial points out, "outputs >= 0 are positive". However, to get a sense of how certain the model is about an output, I had an idea for obtaining something that might come close:
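For context, the standard way to turn a logit from a model trained with from-logits binary cross-entropy into a probability is the sigmoid function (in TensorFlow, `tf.sigmoid` applied to the model output). A minimal sketch in plain Python, with hypothetical logit values standing in for the Dense(1) output:

```python
import math

def sigmoid(logit: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical logits from a Dense(1) output layer
for logit in (-2.0, 0.0, 3.0):
    print(f"logit {logit:+.1f} -> p(positive) = {sigmoid(logit):.3f}")
```

Note how a logit of exactly 0 maps to probability 0.5, matching the tutorial's "outputs >= 0 are positive" decision rule.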
From the validation dataset I collect all of the logits and sort them from lowest to highest. Then I find the index of the value closest to the prediction output. Where this index falls (rescaled to a percentage) is the percentage of certainty.
```python
import numpy as np

# Logits for the whole validation set, flattened from shape (n, 1) to (n,)
val_logits = model.predict(validation_dataset).flatten()
pred_logit = model.predict(np.array(["This was a good movie."])).flatten()[0]

# Position of the prediction within the sorted validation logits
sorted_logits = np.sort(val_logits)
index = np.searchsorted(sorted_logits, pred_logit)
print(index / len(sorted_logits))
```
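The percentile computation itself can be checked in isolation, with deterministic synthetic logits standing in for the `model.predict` outputs (all values here are hypothetical):

```python
import numpy as np

# Hypothetical validation logits, evenly spread from -5 to 5
val_logits = np.linspace(-5.0, 5.0, 1001)
pred_logit = 1.5  # hypothetical logit for a new review

# Fraction of validation logits that fall below the prediction
sorted_logits = np.sort(val_logits)
percentile = np.searchsorted(sorted_logits, pred_logit) / len(sorted_logits)
print(f"prediction sits at the {percentile:.1%} mark of the validation logits")
```

This is an empirical percentile of the logit within the validation distribution, which is not the same quantity as the sigmoid probability: it tells you how the prediction ranks relative to other examples, not the model's estimated probability that the label is positive.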
Is this reasoning sound?