All three of these tools are closely related and are used after a model has been created and trained.
- Explainable AI: shows why the model made the prediction it did. Using feature attribution, it highlights which part of an image, or which feature of a text input, was most influential in the decision.
- TensorBoard: provides a holistic visual view of metrics, loss, fairness, hyperparameter tuning, and overall model performance.
- What-If Tool: part of TensorBoard, but also usable outside it, for example in a Jupyter notebook. You pass it the final trained model along with the data, and it gives you a much broader visual picture of loss measures, model fairness, etc.
Explainable AI is used to get a deeper understanding of a model beyond standard evaluation metrics such as ROC/AUC or RMSE. It tells you which features in your trained model influence the result, and to what extent. Note that here you cannot dynamically modify values and re-test the model on new inputs.
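As a rough local analogue of this kind of feature attribution (this is scikit-learn's permutation importance, not the managed Explainable AI service), you can measure how much a model's score drops when each feature is shuffled:

```python
# Minimal sketch: post-hoc feature attribution via permutation importance.
# Dataset and model choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the score drop on held-out data:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=5,
                                random_state=0)
ranked = result.importances_mean.argsort()[::-1]
for i in ranked[:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.4f}")
```

Like Explainable AI's attributions, this answers "which features mattered, and how much" for an already-trained model; it does not let you interactively edit inputs.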
TensorBoard: this is a visualization tool used to get insight into your standard evaluation metrics such as ROC/AUC and RMSE. Beyond that, you can train your model with several hyperparameter settings and use TensorBoard to visually compare the model's performance across those settings, so TensorBoard also helps with hyperparameter tuning.
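A minimal sketch of how metrics end up in TensorBoard, using TensorFlow's `tf.summary` API (the log-directory name and the hard-coded loss values are illustrative assumptions standing in for a real training loop):

```python
import os
import tempfile

import tensorflow as tf

# One writer per run; runs with different hyperparameters get
# different subdirectories so TensorBoard can overlay their curves.
logdir = os.path.join(tempfile.mkdtemp(), "run1")
writer = tf.summary.create_file_writer(logdir)

# Pretend these losses came from a training loop for one
# hyperparameter setting.
losses = [0.9, 0.6, 0.4, 0.3]
with writer.as_default():
    for step, loss in enumerate(losses):
        tf.summary.scalar("loss", loss, step=step)
writer.flush()
```

TensorBoard then visualizes the event files written under `logdir` when launched with `tensorboard --logdir <parent directory>`.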
What-If Tool: as mentioned above, it is part of TensorBoard. What it adds is the ability to dynamically modify the input data and see how the model's predictions change, so the analysis in What-If is interactive rather than a static report (note that the model is re-run on the edited inputs, not retrained).
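The What-If Tool does this interactively in a widget; the same probe can be sketched manually (model, dataset, and the edited feature value below are illustrative assumptions): change one feature of a datapoint and compare the model's predictions before and after.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Take one datapoint and record the model's prediction for it.
point = X[0].copy()
before = model.predict_proba([point])[0]

# "What if" this feature had a different value? Edit it and re-predict
# (no retraining happens, only inference on the modified input).
point[2] = 5.0
after = model.predict_proba([point])[0]

print("before:", before.round(3))
print("after: ", after.round(3))
```

This before/after comparison on edited inputs is exactly the kind of counterfactual probing the What-If Tool exposes through its UI.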
Conclusion: the three tools are quite different from one another. They may look similar at first, but they are not the same.