Metrics related [predictions must be <= 1] error

Hello,

I'm getting the error below with TensorFlow 2.7. The same error happens with 2.6, but the stack trace is different.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\eager\execute.py", line 58, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [predictions must be <= 1] [Condition x <= y did not hold element-wise:] [x (sequential_4/output/BiasAdd:0) = ] [[1.00585222][1.00123906][0.880351603]...] [y (Cast_3/x:0) = ] [1]
         [[node assert_less_equal/Assert/AssertGuard/Assert
 (defined at C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py:612)
]] [Op:__inference_train_function_29922]

Errors may have originated from an input operation.
Input Source operations connected to node assert_less_equal/Assert/AssertGuard/Assert:
In[0] assert_less_equal/Assert/AssertGuard/Assert/assert_less_equal/All:
In[1] assert_less_equal/Assert/AssertGuard/Assert/data_0:
In[2] assert_less_equal/Assert/AssertGuard/Assert/data_1:
In[3] assert_less_equal/Assert/AssertGuard/Assert/data_2:
In[4] assert_less_equal/Assert/AssertGuard/Assert/sequential_4/output/BiasAdd:
In[5] assert_less_equal/Assert/AssertGuard/Assert/data_4:
In[6] assert_less_equal/Assert/AssertGuard/Assert/Cast_3/x:

Operation defined at: (most recent call last)
>>>   File "<stdin>", line 1, in <module>
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py", line 64, in error_handler
>>>     return fn(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 1216, in fit
>>>     tmp_logs = self.train_function(iterator)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 878, in train_function
>>>     return step_function(self, iterator)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 867, in step_function
>>>     outputs = model.distribute_strategy.run(run_step, args=(data,))
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 860, in run_step
>>>     outputs = model.train_step(data)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 817, in train_step
>>>     self.compiled_metrics.update_state(y, y_pred, sample_weight)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\engine\compile_utils.py", line 460, in update_state
>>>     metric_obj.update_state(y_t, y_p, sample_weight=mask)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py", line 73, in decorated
>>>     update_op = update_state_fn(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\metrics.py", line 177, in update_state_fn
>>>     return ag_update_state(*args, **kwargs)
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\metrics.py", line 1069, in update_state
>>>     return metrics_utils.update_confusion_matrix_variables(
>>>
>>>   File "C:\Users\hakan\AppData\Roaming\Python\Python39\site-packages\keras\utils\metrics_utils.py", line 612, in update_confusion_matrix_variables
>>>     tf.compat.v1.assert_less_equal(
>>>

Function call stack:
train_function -> assert_less_equal_Assert_AssertGuard_false_28872

The interesting thing is that this only happens when using the BinaryCrossentropy(from_logits=True) loss together with metrics other than BinaryAccuracy, for example Precision or AUC.

In other words, with BinaryCrossentropy(from_logits=False) training works with any metric, while with BinaryCrossentropy(from_logits=True) it only works with the BinaryAccuracy metric.

Below is the model:

model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(150, activation='relu', name='hidden_1', input_shape=(x_train.shape[1],)),
  tf.keras.layers.Dense(50, activation='relu', name='hidden_2'),
  tf.keras.layers.Dense(1, activation=None, name='output')  # no activation: the model outputs raw logits
])

model.compile(optimizer=tf.keras.optimizers.Adam(),
  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), 
  metrics=METRICS)

model.fit(x_train, y_train, epochs=20, validation_data=(x_val, y_val), callbacks=[tensorboard_callback], verbose=1)
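
METRICS here is a list of Keras metric objects; the list below is representative rather than my exact one, but it reproduces the behaviour described above (BinaryAccuracy works, Precision and AUC trigger the assertion):

METRICS = [
  tf.keras.metrics.BinaryAccuracy(name='accuracy'),
  tf.keras.metrics.Precision(name='precision'),
  tf.keras.metrics.AUC(name='auc')
]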

Any clues for how to fix this?

Thanks
raft

Is the confusion matrix update expecting predicted values between 0 and 1?

Not sure what this means. I'm not doing anything with the confusion matrix, at least not consciously.


That function, update_confusion_matrix_variables, is called internally by some metrics, e.g. Precision and AUC.
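
For example, the same assertion fires if you feed one of those metrics values outside [0, 1] directly (a standalone sketch with made-up numbers, run eagerly):

import tensorflow as tf

m = tf.keras.metrics.Precision()
m.update_state([1.0, 0.0], [0.9, 0.1])   # probabilities in [0, 1]: fine
m.update_state([1.0, 0.0], [1.2, -0.3])  # logits outside [0, 1]: raises
                                         # InvalidArgumentError "predictions must be <= 1"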

I see. Can we say, then, that this is a bug in some metrics? Or else, how can I fix this?

As you can see from the examples in:
https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy

When you use the non-default from_logits=True, your predictions are outside the 0-1 range.
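
To make that concrete: a linear output layer produces unbounded values, and tf.sigmoid is what maps them into (0, 1). A quick sketch (the logits are the first few values from the error message above; the printed outputs are approximate):

import tensorflow as tf

logits = tf.constant([1.00585222, 1.00123906, 0.880351603])  # raw outputs of the 'output' layer
probs = tf.sigmoid(logits)                                    # squashed into (0, 1)
print(probs.numpy())                                          # approx. [0.732 0.731 0.707]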

See also: Not every keras.metrics.* accept from_logits=True · Issue #42182 · tensorflow/tensorflow · GitHub

Thanks @Bhack!

I guess this is my workaround solution for now:
Not every keras.metrics.* accept from_logits=True · Issue #42182 · tensorflow/tensorflow · GitHub.

I would call this a bug, since the BinaryCrossentropy documentation suggests using from_logits=True.

In the issue you had posted, they state this is fixed, but I guess that's not the case. I will post there.

I confirm that the mentioned workaround effectively solved my issue.
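
For anyone hitting the same thing, a sketch of one workaround in that direction: keep from_logits=True for the loss and hand the confusion-matrix-based metrics values in [0, 1]. The wrapper class below is illustrative, not verbatim from the issue:

import tensorflow as tf

# Illustrative wrapper: apply a sigmoid to the logits before the stock metric
# sees them, so Precision gets values in [0, 1] while the loss keeps from_logits=True.
class PrecisionFromLogits(tf.keras.metrics.Precision):
  def update_state(self, y_true, y_pred, sample_weight=None):
    return super().update_state(y_true, tf.sigmoid(y_pred), sample_weight=sample_weight)

METRICS = [
  tf.keras.metrics.BinaryAccuracy(threshold=0.0),      # threshold 0 works directly on logits
  PrecisionFromLogits(name='precision'),
  tf.keras.metrics.AUC(from_logits=True, name='auc')   # AUC accepts from_logits in recent TF versions
]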