We propose a novel verification method for high-dimensional feed-forward neural networks governed by ReLU, Sigmoid and Tanh activation functions. We show that the method is complete for ReLU networks and sound for other activation functions. The technique extends symbolic interval propagation by using gradient descent to locate counter-examples from spurious solutions generated by the associated LP problems. The approach includes a novel adaptive splitting strategy intended to refine the nodes with the greatest impact on the output of the network. The resulting implementation, called VERINET, achieved speed-ups of approximately one order of magnitude for safe cases and three orders of magnitude for unsafe cases on MNIST models against state-of-the-art complete methods. VERINET could verify networks of over 50,000 ReLU nodes trained on the CIFAR-10 data-set; these are larger than networks that could previously be verified via complete methods.
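To illustrate the kind of bound propagation the abstract builds on, the following is a minimal sketch of concrete interval bound propagation through a feed-forward ReLU network. This is a simplified relative of the symbolic interval propagation used by VERINET, not the paper's actual algorithm: VERINET propagates symbolic (linear) bounds and refines them via LP relaxations and splitting, whereas this sketch only tracks scalar lower/upper bounds per neuron. All function and variable names here are illustrative.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Interval arithmetic for y = W x + b: split W into its positive
    # and negative parts so each output bound pairs with the correct
    # input bound (positive weights with lo for the lower bound, etc.).
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so interval bounds map through it directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def propagate(layers, lo, hi):
    # layers: list of (W, b) pairs, with a ReLU after every layer
    # except the last; returns sound output bounds for the input box.
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    return lo, hi
```

The resulting output box is sound but generally loose; the paper's symbolic propagation, LP refinement, and adaptive node splitting exist precisely to tighten such bounds enough to decide verification queries.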