Explaining "Blackbox" ML Models - Practical Application of SHAP
GBM models are battle-tested and powerful, but their reputation suffers from a lack of explainability. Data scientists typically look at variable importance plots, but those alone are not enough to explain how a model works. To maximize adoption by model users, use SHAP values to answer common explainability questions and build trust in your models.
In this post, we will train a GBM model on a simple dataset, and you will learn how to explain how the model works. The goal is not to explain the math behind SHAP, but to show a non-technical user how the input variables relate to the output variable and how individual predictions are made.
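
To preview the workflow, here is a minimal sketch: train a GBM, compute SHAP values with a `TreeExplainer`, then look at a global summary plot and a single-prediction force plot. The dataset (California housing) and the scikit-learn `GradientBoostingRegressor` are illustrative assumptions, not necessarily the exact setup used in this post.

```python
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Load a simple tabular regression dataset (assumption: any such dataset works)
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a GBM model
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Compute SHAP values: one additive contribution per feature per prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive the model's output overall
shap.summary_plot(shap_values, X_test)

# Local view: why the model made one specific prediction
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])
```

The key property that makes SHAP useful for non-technical audiences is additivity: for each prediction, the feature contributions sum to the difference between that prediction and the model's average output, so every plot can be read as "this feature pushed the prediction up or down by this much."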
