

Understanding the rationale behind the predictions of machine learning models is of paramount importance in numerous applications. A variety of explainable methods have been developed to shed light on these predictions by assessing the individual contributions of features to the outcome of black-box models. However, existing methods often overlook interactions among features, restricting their explanations to isolated feature attributions. In this paper, we introduce a novel Choquet integral-based explainable method, termed ChoquEx, which not only accounts for interactions among features but also enables the computation of contributions for any subset of features. To achieve this, we propose an algorithm based on support vector regression that efficiently estimates the contributions of all feature subsets. We further leverage game-theoretic concepts, namely Shapley values and the interaction index, to quantify both feature importance and interaction strength, adding interpretability and insight into the model's decision-making process. To evaluate the effectiveness of ChoquEx, we conduct extensive experiments on diverse real-world scenarios. The results demonstrate that the proposed model outperforms existing explainable techniques. With its ability to uncover feature interactions and furnish comprehensive explanations, ChoquEx significantly enhances our understanding of predictive models, opening new avenues for applying machine learning in critical domains that require transparent decision-making.
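
For concreteness, the following are the standard textbook definitions of the three objects the abstract names: the discrete Choquet integral with respect to a capacity $\mu$ on the feature set $N = \{1, \dots, n\}$, the Shapley value, and the pairwise interaction index. These are the classical formulas from the cooperative game theory literature, given here in generic notation; they are a reference sketch, not necessarily the exact formulation used in the paper.

% Discrete Choquet integral of x = (x_1, ..., x_n) w.r.t. a capacity mu,
% where the permutation sigma sorts features so that
% x_{sigma(1)} <= ... <= x_{sigma(n)}, with x_{sigma(0)} := 0
% and A_{sigma(i)} := {sigma(i), ..., sigma(n)}:
\[
  C_\mu(x) \;=\; \sum_{i=1}^{n} \bigl(x_{\sigma(i)} - x_{\sigma(i-1)}\bigr)\,\mu\bigl(A_{\sigma(i)}\bigr).
\]
% Shapley value of feature i: its marginal contribution mu(S u {i}) - mu(S),
% averaged over all orderings in which i can join a coalition S:
\[
  \phi_i(\mu) \;=\; \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(n-|S|-1)!}{n!}\,
    \bigl[\mu(S \cup \{i\}) - \mu(S)\bigr].
\]
% Pairwise interaction index of features i and j (Murofushi and Soneda):
% positive values indicate complementarity, negative values redundancy:
\[
  I_{ij}(\mu) \;=\; \sum_{S \subseteq N \setminus \{i,j\}}
    \frac{|S|!\,(n-|S|-2)!}{(n-1)!}\,
    \bigl[\mu(S \cup \{i,j\}) - \mu(S \cup \{i\}) - \mu(S \cup \{j\}) + \mu(S)\bigr].
\]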