Valid reasoning to get probability? NLP classification

Hi there,

After going through the TensorFlow text classification with an RNN tutorial, I was wondering how I could get a sense of probability from a prediction output. For some background: it is a binary sentiment classification task, with a Dense(1) final layer and binary cross entropy with logits as the loss.

Because the final layer outputs logits, these cannot be directly interpreted as probabilities; as the tutorial points out, “outputs >= 0 are positive”. However, to get a sense of how certain the model is, I had an idea for something that might come close:

From the validation dataset I sort all of the logits from lowest to highest. Then I find the index of the value closest to the prediction's logit. Where that index falls in the sorted list, rescaled to a percentage, gives a rough percentage of certainty.

Code:

import numpy as np

val_logits = model.predict(validation_dataset).ravel()  # all validation logits as a 1-D array
pred_logit = model.predict(np.array(["This was a good movie."])).item()
# Empirical percentile rank of the prediction among the sorted validation logits
rank = np.searchsorted(np.sort(val_logits), pred_logit)
print(rank / len(val_logits))

Is this reasoning sound?
Thanks :slight_smile:

EDIT:
Since I am dealing with binary classification, I could also use a sigmoid activation in the output Dense layer and correspondingly set from_logits=False in the binary_crossentropy loss. This does give me an output between 0 and 1.
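For reference, a minimal sketch of that variant (the tutorial's text encoder and RNN layers are omitted and stand in as a placeholder comment):

import tensorflow as tf

model = tf.keras.Sequential([
    # ... the tutorial's text encoder and RNN layers would come first ...
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
    metrics=['accuracy'],
)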

I would replace the final layer with Dense(2, activation='softmax').
Then you would be able to see the model's estimated probability of each sample in your dataset belonging to each of the two classes.
In this case the loss should be categorical cross entropy.

The binary cross entropy loss with from_logits=True assumes a sigmoid activation, so you should apply a sigmoid to the logits to recover the probabilities. However, those probabilities are likely to be poorly calibrated, and so not actually reflective of the true rate of a particular class (i.e. if the model predicts the positive class with 90% probability, that doesn't mean it will be positive 9 times out of 10). If you really need high-quality probability estimates, I'd recommend looking at calibration, or maybe even something like Bayesian deep learning.
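For example, a minimal sketch, assuming model is the logit-output model from the question above:

import numpy as np
import tensorflow as tf

logits = model.predict(np.array(["This was a good movie."]))
probs = tf.sigmoid(logits).numpy()  # maps each logit into (0, 1)
print(probs)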

That’s interesting, thanks! Just to be sure: are the following two layers identical?

  1. Dense(1, activation="sigmoid")
  2. Dense(2, activation="softmax")

To my understanding they are, might be wrong though :slight_smile:

I will look into Bayesian deep learning, as it comes up quite a lot :slight_smile: Thanks!

For a binary classification task you can use either of these layers as the final layer.
Dense(1, activation="sigmoid") will produce one value.
Dense(2, activation="softmax") will produce two values: the probabilities of belonging to each of the two classes.
To get the predicted class out of the two values produced by the model, you can use np.argmax(predicted_y).
The loss functions in these two cases should be different: binary cross entropy in the first case, categorical cross entropy in the second.
If you pass targets to the model as a two-column matrix (one-hot encoded classes), use categorical cross entropy; if you pass them as a single column of 0 and 1 values, use sparse categorical cross entropy.
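For reference, a sketch of the two-unit setup (the tutorial's encoder/RNN layers are again omitted, and the loss name assumes integer 0/1 targets rather than one-hot rows):

import tensorflow as tf

model = tf.keras.Sequential([
    # ... the tutorial's encoder / RNN layers would come first ...
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',  # integer 0/1 targets
    metrics=['accuracy'],
)

# After training: probs = model.predict(x) has shape (batch, 2), rows sum to 1,
# and np.argmax(probs, axis=1) gives the predicted class per sample.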


The binary cross entropy and categorical cross entropy should produce equivalent loss values in the case of binary classification, and the weights in the dense layer for the 2 output case should be the negatives of each other, so in most cases it won’t matter.

This isn’t true if you’re using label smoothing, and numerical precision issues could cause them to produce slightly different outputs, but in general there is little reason to prefer one over the other.
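A quick NumPy check of that equivalence, with a hand-picked logit and the two-output logits chosen as negatives of each other, as described above:

import numpy as np

z = 0.8  # single logit from the Dense(1) head
y = 1    # true label

# Dense(1, sigmoid) + binary cross entropy
p = 1.0 / (1.0 + np.exp(-z))
bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Dense(2, softmax) + categorical cross entropy, with logits [-z/2, z/2]
logits = np.array([-z / 2, z / 2])
q = np.exp(logits) / np.exp(logits).sum()
cce = -np.log(q[y])

print(bce, cce)  # equal up to floating point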