In real-life machine learning applications, there are often costs associated with the features needed for prediction. This is the case, for example, when deploying learned models in mass-produced products, where manufacturing costs or space limitations may restrict the number of feature-extracting sensors that can be included in each device. In such situations, the training process involves a sparsity budget restricting the number of features the learned predictor may use. In this paper, we consider the problem of learning multi-label predictors under a sparsity budget. For this purpose, we compare three wrapper-based greedy forward selection approaches for constructing sparse multi-label learning models. Our experiments show that the approach that selects a common set of features shared by all tasks, by greedily maximizing the prediction performance averaged over the tasks, yields better prediction performance than the approaches that select features separately for each task.
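To make the best-performing approach concrete, below is a minimal sketch of wrapper-based greedy forward selection of a common feature subset shared across all tasks, written against a scikit-learn-style base learner. This is an illustration under stated assumptions, not the paper's implementation: the function name greedy_common_selection, the choice of base learner, and the use of cross-validated accuracy as the wrapper criterion are all assumptions introduced here.

```python
# Sketch: greedy forward selection of a common feature set for
# multi-label prediction under a sparsity budget. At each step, the
# feature whose addition most improves the cross-validated prediction
# performance averaged over all tasks (label columns) is added, until
# the budget is exhausted. Names and criterion are illustrative.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def greedy_common_selection(X, Y, budget, base_learner, cv=3):
    """Select up to `budget` feature indices shared by all tasks.

    X: (n_samples, n_features) feature matrix.
    Y: (n_samples, n_tasks) label matrix, one column per task.
    """
    n_features = X.shape[1]
    selected = []
    for _ in range(budget):
        best_feature, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            candidate = selected + [j]
            # Wrapper criterion: mean cross-validated score of the base
            # learner on the candidate feature set, averaged over tasks.
            scores = [
                cross_val_score(clone(base_learner), X[:, candidate],
                                Y[:, t], cv=cv).mean()
                for t in range(Y.shape[1])
            ]
            score = float(np.mean(scores))
            if score > best_score:
                best_feature, best_score = j, score
        selected.append(best_feature)
    return selected
```

For example, with a hypothetical binary multi-label dataset one could call greedy_common_selection(X, Y, budget=10, base_learner=RidgeClassifier()) from sklearn.linear_model, obtaining at most ten feature indices usable by every task. The alternative approaches the paper compares against would instead run a selection loop of this kind independently per task, producing a separate feature subset for each label.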