Although recent studies have shown that a Bayesian network classifier (BNC) that maximizes the classification accuracy (i.e., minimizes the 0/1 loss function) is a powerful tool in knowledge representation and classification, this classifier focuses on the majority class, is usually uninformative about the distribution of misclassifications, and is insensitive to error severity (making no distinction between misclassification types). We propose to learn a BNC using an information measure (IM) that jointly maximizes classification and information, and evaluate this measure using various databases. We show that an IM-based BNC is superior to BNCs learned using other measures, especially for ordinal classification and imbalanced problems, and does not fall behind state-of-the-art algorithms with respect to accuracy and amount of information provided.
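As a rough illustration of the kind of criterion the abstract describes, the sketch below scores a classifier by combining classification accuracy with the normalized mutual information between true and predicted labels, so that a model is rewarded both for being correct and for producing an informative distribution of predictions. The weighting parameter `alpha`, the choice of normalizing by the entropy of the true class distribution, and the function name `joint_score` are illustrative assumptions, not the paper's actual information measure.

```python
import math
from collections import Counter

def joint_score(y_true, y_pred, alpha=0.5):
    """Hypothetical combined criterion (not the paper's exact IM):
    alpha * accuracy + (1 - alpha) * normalized mutual information."""
    n = len(y_true)
    # 0/1-loss-based accuracy: fraction of correctly classified instances
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n

    # Empirical joint and marginal distributions of (true, predicted) labels
    joint = Counter(zip(y_true, y_pred))
    p_true = Counter(y_true)
    p_pred = Counter(y_pred)

    # Mutual information I(Y; Y_hat) in bits, from the empirical counts
    mi = 0.0
    for (t, p), c in joint.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy * n * n / (p_true[t] * p_pred[p]))

    # Normalize by H(Y) so the information term lies in [0, 1]
    h_true = -sum((c / n) * math.log2(c / n) for c in p_true.values())
    nmi = mi / h_true if h_true > 0 else 0.0

    return alpha * acc + (1 - alpha) * nmi
```

A majority-class predictor on an imbalanced problem can achieve high accuracy while its predictions carry zero mutual information with the true labels; a criterion of this joint form penalizes exactly that behavior, which matches the motivation given in the abstract.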