Artificial Intelligence (AI) algorithms are generally unable to account for the logic behind each decision they make on the way to a solution. This “black box” problem limits the usefulness of AI in military, medical, and financial security applications, among others, where the price of a mistake is high and the decision-maker must be able to monitor and understand each step of the process. In our research, we focus on the application of Explainable AI to two different kinds of log anomaly detection systems. In particular, we use the Shapley value approach from cooperative game theory to explain the output of two anomaly detection algorithms: decision tree and DeepLog. Both algorithms come from Loglizer, a machine learning-based log analysis toolkit for automated anomaly detection. The novelty of our research is that, by using the Shapley value together with special coding techniques, we are able to evaluate, or explain, the contribution of both a single log event and a grouped sequence of log events to the anomaly detection decision. We show how each event and each sequence of events influences the output of the anomaly detection system.
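To illustrate the general idea, the sketch below computes exact Shapley values over event-count features of a log session, treating either a single event or a grouped sequence of events as one “player” in the cooperative game. This is a minimal, self-contained illustration, not the paper’s implementation: the toy data, the scikit-learn DecisionTreeClassifier standing in for Loglizer’s decision tree model, and the mean-count baseline used to represent an “absent” event are all assumptions made for the example.

```python
import itertools
import math
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy event-count matrix: each row is a log session, each column the
# count of one event template (as produced by a log parser).
X_train = np.array([
    [5, 0, 1, 0],
    [4, 1, 0, 0],
    [0, 3, 2, 1],
    [1, 4, 0, 2],
])
y_train = np.array([0, 0, 1, 1])  # 1 = anomalous session

# Stand-in for the detector; the paper uses Loglizer's decision tree.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def value(indices, x, baseline):
    """Characteristic function v(S): the model's anomaly score when
    only the events in `indices` keep their observed counts and the
    rest are reset to the baseline (an assumed 'absent' reference)."""
    masked = baseline.copy()
    masked[list(indices)] = x[list(indices)]
    return model.predict_proba(masked.reshape(1, -1))[0, 1]

def shapley(players, x, baseline):
    """Exact Shapley values by enumerating all coalitions.
    Each player may be a single event index or a tuple of indices,
    so a grouped sequence of events is treated as one player."""
    n = len(players)
    phi = np.zeros(n)
    for i, p in enumerate(players):
        rest = players[:i] + players[i + 1:]
        for r in range(n):
            for S in itertools.combinations(rest, r):
                idx_S = [j for q in S
                         for j in (q if isinstance(q, tuple) else (q,))]
                idx_Sp = idx_S + list(p if isinstance(p, tuple) else (p,))
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                w = (math.factorial(r) * math.factorial(n - r - 1)
                     / math.factorial(n))
                phi[i] += w * (value(idx_Sp, x, baseline)
                               - value(idx_S, x, baseline))
    return phi

x = np.array([1, 4, 0, 2])        # session to explain
baseline = X_train.mean(axis=0)   # assumed reference for absent events

print(shapley([0, 1, 2, 3], x, baseline))    # per-event contributions
print(shapley([(0, 2), 1, 3], x, baseline))  # events 0 and 2 as one group
```

Because the enumeration is exact, the per-player contributions sum to the difference between the model’s score on the full session and its score on the baseline (the efficiency property of the Shapley value). The grouped call shows the coding trick the abstract alludes to: a sequence of events is folded into a single player, yielding one contribution for the whole group.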