In multi-task learning (MTL), multiple prediction tasks are learned jointly so that generalization performance is improved by transferring information across tasks. However, not all tasks are related, and training unrelated tasks together can degrade prediction performance, a phenomenon known as negative transfer. To overcome this problem, we propose a novel MTL method that robustly groups correlated tasks into clusters and allows useful information to be transferred only within each cluster. The proposed method assumes that the task clusters lie in low-rank subspaces of the parameter space, where both the number of clusters and their dimensions are unknown. By applying subspace clustering to the task parameters, parameter learning and task grouping are performed in a unified framework. To mitigate the error induced by the basic linear learner and to robustify the model, the effect of hidden tasks is exploited. Moreover, the framework is extended to a multi-layer architecture that progressively extracts hierarchical subspace structures of the tasks, which further improves generalization. An optimization algorithm is proposed, and its effectiveness is validated by experiments on both synthetic and real-world datasets.
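As a rough illustration of the core idea only (not the paper's unified algorithm), the sketch below fits an independent linear learner per task and then groups the learned task parameter vectors. A cosine affinity combined with spectral clustering is used here as a simplified stand-in for the subspace clustering step; the synthetic data and all names (`n_tasks`, `basis_a`, etc.) are assumptions made for the example.

```python
# Sketch: cluster tasks whose parameter vectors lie in shared low-rank
# subspaces. Assumes per-task ridge regression as the "basic linear learner"
# and spectral clustering on a cosine affinity in place of full subspace
# clustering.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Synthetic setup: two latent 2-D subspaces, each spawning four tasks.
n_tasks, n_features, n_samples = 8, 20, 100
basis_a = rng.standard_normal((n_features, 2))
basis_b = rng.standard_normal((n_features, 2))
true_w = np.column_stack(
    [basis_a @ rng.standard_normal(2) for _ in range(4)]
    + [basis_b @ rng.standard_normal(2) for _ in range(4)]
)  # shape (n_features, n_tasks)

# Step 1: learn each task's parameters independently.
W = np.zeros((n_features, n_tasks))
for t in range(n_tasks):
    X = rng.standard_normal((n_samples, n_features))
    y = X @ true_w[:, t] + 0.1 * rng.standard_normal(n_samples)
    W[:, t] = Ridge(alpha=1.0).fit(X, y).coef_

# Step 2: group tasks by the similarity of their parameter vectors.
# Tasks sharing a low-dimensional subspace yield strongly correlated
# parameters, while tasks from different random subspaces in a
# higher-dimensional ambient space are nearly orthogonal.
Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
affinity = np.abs(Wn.T @ Wn)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("task cluster labels:", labels)
```

In the paper's framework, by contrast, parameter learning and task grouping are optimized jointly rather than in two separate steps, and the number of clusters and their dimensions need not be fixed in advance.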