Deep learning provides a variety of neural-network-based models, known as Deep Neural Networks (DNNs), which are being used successfully in several domains to build highly accurate predictors from data. In particular, the predictive performance of a dense, fully-connected multi-layer neural network may vary depending on several factors. In this paper, 18 synthetic datasets were used to test the effect of data dimension and data structure on the predictive performance of a standard DNN and of an architecture-constrained DNN (c-DNN) whose structure encodes problem-specific information. The results of the analysis show that the c-DNN clearly outperforms the standard DNN in most of the cases considered. Moreover, they suggest that both constraining the network architecture and keeping the set of input features as small as possible, while still relevant to the problem addressed, can reduce overfitting and yield better prediction results.
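The abstract contrasts a standard fully-connected DNN with an architecture-constrained one. A minimal sketch of the idea, assuming the constraint takes the form of a binary connectivity mask derived from problem structure (the paper may implement the constraint differently); the layer sizes, weights, and grouping of inputs below are illustrative only:

```python
def masked_dense(x, weights, mask, bias):
    """Forward pass of one dense layer whose connectivity is restricted
    by a binary mask (1 = connection kept, 0 = connection removed).
    A standard fully-connected layer is the special case where the
    mask is all ones."""
    n_out = len(bias)
    out = []
    for j in range(n_out):
        s = bias[j]
        for i, xi in enumerate(x):
            s += xi * weights[i][j] * mask[i][j]
        out.append(max(0.0, s))  # ReLU activation
    return out

# Hypothetical 4-input, 2-unit layer: each hidden unit only sees the
# group of inputs it is assigned to, encoding problem-specific structure.
x = [1.0, 2.0, 3.0, 4.0]
weights = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
bias = [0.0, 0.0]
mask = [[1, 0], [1, 0], [0, 1], [0, 1]]  # inputs 0-1 -> unit 0; 2-3 -> unit 1
print(masked_dense(x, weights, mask, bias))  # [1.5, 3.5]
```

Because the mask zeroes out connections that the problem structure deems irrelevant, the constrained layer has fewer effective parameters than its fully-connected counterpart, which is consistent with the reduced-overfitting effect the paper reports.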