Robustness metrics

Hello,

Is there any metric provided to assess the robustness / stability of an ML module? As far as I understand, the ones provided are more related to functionality.
I understand robustness as the ability to perform the intended function in the presence of abnormal or unknown inputs, which is closely related to reliability.

Any comment is welcome
Thanks in advance

Hi @Teresa_IO,

Traditional metrics for machine learning models typically focus on performance-related aspects such as accuracy, precision, recall, and F1 score.
The most relevant metrics will depend on the specific application of the ML module.
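For reference, here is a minimal sketch of how those traditional performance metrics are usually computed with scikit-learn; the labels and predictions below are toy placeholders, not real data:

```python
# Illustrative only: traditional performance metrics on toy (hypothetical) labels.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 1, 0, 0]   # ground-truth labels (toy example)
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]   # model predictions (toy example)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
```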

In my experience, the following metrics and techniques can help assess the robustness and stability of an ML module (a small illustrative sketch of two of them follows the list):

- Adversarial Testing
- Sensitivity Analysis
- Cross-Validation
- Monte Carlo Dropout
- Confidence Intervals (between predictions and actual values)
- Out-of-Distribution Detection
- Lipschitz Continuity (which measures how much the output of a model changes with respect to small changes in the input)
- Wasserstein Distance (which quantifies the similarity between the distribution of predictions made by a model on clean inputs and adversarial examples)
- Time Stability
- Stress Testing
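As a rough illustration, the sketch below combines two of these checks: perturbation-based sensitivity analysis and an empirical (local) Lipschitz estimate. It is a minimal sketch, not a definitive implementation; `predict` is a hypothetical stand-in for your module's real inference call, and the input `x0` is made up.

```python
# Minimal sketch: sensitivity analysis + empirical Lipschitz estimate.
# Assumes `predict` is your ML module's inference function (hypothetical here).
import numpy as np

def predict(x: np.ndarray) -> np.ndarray:
    """Hypothetical model: replace with your module's real prediction call."""
    w = np.array([0.7, -1.2, 0.4])
    return 1.0 / (1.0 + np.exp(-(x @ w)))  # toy logistic model

def sensitivity_and_lipschitz(x: np.ndarray, n_trials: int = 100, eps: float = 0.01):
    """Perturb the input with small Gaussian noise and record how much the
    output moves; the max output-change / input-change ratio is a rough
    empirical estimate of the local Lipschitz constant."""
    rng = np.random.default_rng(0)
    base = predict(x)
    changes, ratios = [], []
    for _ in range(n_trials):
        noise = rng.normal(scale=eps, size=x.shape)
        out = predict(x + noise)
        out_change = float(np.max(np.abs(out - base)))
        in_change = float(np.linalg.norm(noise))
        changes.append(out_change)
        ratios.append(out_change / in_change)
    return {
        "mean_output_change": float(np.mean(changes)),  # sensitivity
        "max_output_change": float(np.max(changes)),
        "empirical_lipschitz": float(np.max(ratios)),   # local Lipschitz estimate
    }

x0 = np.array([0.5, -0.3, 1.2])  # a representative "clean" input (toy example)
print(sensitivity_and_lipschitz(x0))
```

Small mean/max output changes and a small empirical Lipschitz value suggest the module is locally stable around that input; running the same check over many representative inputs (including abnormal ones) gives a fuller picture.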

I hope this helps!

Thanks.

Thanks very much, yes it helps!