In recent years, there has been strong concern over the robustness of machine learning systems, especially when they operate in critical domains. One such domain is cybersecurity, and a particular example is malware detection. This work aims to provide a formal technique to check the robustness of neural networks applied to malware detection. The technique is based on the automatic translation of the neural network into an equivalent set of equations that can subsequently be rigorously analyzed with respect to conditions on its inputs and outputs. That is, given a particular input to the neural network, the analysis checks whether there exist slight variations of that input that change the network's output. As a case study, we present preliminary results of a robustness analysis for a neural network that detects Windows PE malware. The results of the robustness analysis can be used to certify the robustness of the classifier, or to improve the classifier by fixing the flaws detected.
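The local robustness question described above can be illustrated with a small sketch. The abstract's exact equation-based translation is not given here, so this example uses interval bound propagation, a simpler sound-but-incomplete stand-in: the network's layers are treated as equations over input intervals, and the propagated output bounds certify that no perturbation within an L-infinity ball of radius eps can flip the predicted class. The two-layer ReLU network, its weights, and the class labels are all illustrative assumptions, not the classifier from the paper.

```python
# Hedged sketch: certify local robustness of a tiny two-layer ReLU classifier
# via interval bound propagation. Class 0 plays the role of "malware".
# All weights, inputs, and the eps values below are illustrative.

def affine_bounds(lo, hi, W, b):
    # Propagate interval bounds through y = W x + b.
    # A positive weight attains its minimum at the input's lower bound,
    # a negative weight at the upper bound (and vice versa for the maximum).
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify(x, eps, W1, b1, W2, b2):
    """Return True if every input within L-inf distance eps of x is
    guaranteed to keep the class-0 score above the class-1 score."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    lo, hi = affine_bounds(lo, hi, W1, b1)
    lo, hi = relu_bounds(lo, hi)
    lo, hi = affine_bounds(lo, hi, W2, b2)
    # Robust if the worst case for class 0 still beats the best case for class 1.
    return lo[0] > hi[1]

# Illustrative network: identity hidden layer, then a score difference.
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]
x = [1.0, 0.0]

print(certify(x, 0.1, W1, b1, W2, b2))  # small eps: robustness certified
print(certify(x, 1.0, W1, b1, W2, b2))  # large eps: cannot be certified
```

Note the one-sided nature of the check: a `True` result is a proof of robustness, while a `False` result only means the bounds were too loose to certify; an exact equation-based analysis, as the abstract proposes, would additionally return a concrete counterexample input in that case.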