Understanding SHAP for Interpretable Machine Learning

In this post, we will build intuition for the Shapley value, understand why the order of features matters, see how to move from Shapley values to SHAP, examine the roles of the observational and interventional conditional distributions when filling in absent features, and discuss whether to use the training set or the test set when explaining a model.