Improved Shapley Value Methodology for the Explanation of Machine Learning Models

AI, statistical models and machine learning methods can often be seen as black boxes, both to those who construct the models and to those who use or are exposed to them. This opacity can be due to: a) complicated models, such as deep neural nets, boosted tree models or ensemble models; b) models with many variables/parameters; and c) complex dependencies between the variables. Some models can be explained, but only through their global, not personalized, behavior. In this project, we use a framework from game theory called Shapley values to provide personalized explanations for the predictions made by the black box, in the form of variable importance scores. Shapley values are a model-agnostic explanation framework that can explain any model or method. Generating accurate Shapley values relies on precise modeling of the dependencies between the variables, or on accurate surrogate models. We achieve this by applying different machine learning methods and statistical models.
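The Shapley-value explanations described above can be sketched in a few lines of code. The following is a minimal, illustrative implementation (not the project's own methodology): it computes exact Shapley values for a single prediction by enumerating all feature subsets, and it handles "absent" features by averaging predictions over a background sample. Note that this averaging step implicitly assumes independent features, which is precisely the simplification that accurate dependence modeling aims to improve on. The model, instance and background data below are toy examples.

```python
import itertools
from math import factorial

def shapley_values(predict, x, background):
    """Exact Shapley values for the prediction predict(x).

    'Absent' features are marginalized by averaging predictions over
    the rows of `background` (toy approach; assumes feature independence).
    Runs in O(2^n) -- only feasible for a handful of features.
    """
    n = len(x)

    def value(subset):
        # Expected prediction when only the features in `subset` are known.
        total = 0.0
        for b in background:
            z = [x[i] if i in subset else b[i] for i in range(n)]
            total += predict(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for s in itertools.combinations(others, r):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi
```

For a linear model with this independence-based value function, the Shapley value of each feature reduces to its coefficient times the feature's deviation from the background mean, and the values sum to the difference between the prediction and the average background prediction; this makes the sketch easy to sanity-check on small examples.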
Illustration: Ellen Hegtun
Published July 3, 2023 4:17 PM
Last modified Oct. 23, 2023 11:59 AM