To address the interpretability of deep learning, this paper proposes a feature back-tracking (FBT) approach based on a sparse deep learning architecture. First, for a deep belief network (DBN), both a Kullback-Leibler divergence penalty on the hidden-neuron activations and an L1-norm penalty on the connection weights are introduced, so that the sparse response mechanism and the sparse connectivity of brain neurons can be simulated directly. The DBN can thereby learn a sparse structure and an effective sparse data representation. On this basis, the feature back-tracking technique is put forward. On both single nucleotide polymorphism (SNP) data and MNIST data, FBT performs well at locating the risk loci in the genes and the important sites in the digit images. The results show that the proposed FBT method can pick out essential features through a deep learning architecture while maintaining high classification accuracy and data storage ability. Using sparse layer-wise feature learning to extract key features from the original data is an effective attempt to explore the underlying mechanism of the human brain and the interpretability of deep learning.
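The combined sparsity penalty described in the abstract (KL divergence on hidden activations plus L1 on the weights) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `sparse_penalty`, the hyperparameters `rho`, `beta`, and `lam`, and the magnitude-based back-tracking score at the end are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_penalty(W, b, X, rho=0.05, beta=3.0, lam=1e-3):
    """Illustrative KL-sparsity + L1-weight penalty for one hidden layer.

    W: (n_visible, n_hidden) weights, b: (n_hidden,) hidden biases,
    X: (n_samples, n_visible) data batch. rho is the target mean
    activation; beta and lam weight the two penalty terms (values
    here are arbitrary placeholders, not the paper's settings).
    """
    H = sigmoid(X @ W + b)                    # hidden activations
    rho_hat = np.clip(H.mean(axis=0), 1e-8, 1 - 1e-8)
    # KL divergence between target sparsity rho and observed rho_hat
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    l1 = np.abs(W).sum()                      # sparse-connection penalty
    return beta * kl + lam * l1

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 4))
b = np.zeros(4)
X = rng.random((32, 10))
penalty = sparse_penalty(W, b, X)

# Hypothetical back-tracking step: once training has made W sparse,
# score each input feature by its total absolute outgoing weight and
# keep the top-ranked features (e.g. candidate risk loci or pixels).
importance = np.abs(W).sum(axis=1)
top_features = np.argsort(importance)[::-1][:3]
```

In a trained sparse network the L1 term drives most entries of `W` toward zero, so the back-tracking score concentrates on the few inputs that still feed the hidden layer, which is the intuition behind tracing important SNP loci or digit pixels back from the learned representation.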