

Multi-View Multi-Task Learning (MVMTL) aims to make predictions on dual-heterogeneous data. Such data contains features from multiple views, and the multiple tasks in the data are related to each other through common views. Existing MVMTL methods usually face two major challenges: 1) efficiently preserving the predictive information carried by full-order interactions between views, and 2) learning a parsimonious and highly interpretable model in which the target depends on the features through only a subset of interactions. To address these challenges, we propose a novel MVMTL method based on multiplicative sparse tensor factorization. For 1), we represent the full-order interactions between views as a tensor, which allows a concise model to capture the complex correlations in dual-heterogeneous data. For 2), we decompose the interaction tensor into a product of two components: one shared by all tasks and the other specific to individual tasks. Moreover, tensor factorization is applied to control the model complexity and to learn a consensus latent representation shared by the multiple tasks. Theoretical analysis reveals the equivalence between our method and a family of models with a joint but more general form of regularizers. Experiments on both synthetic and real-world datasets demonstrate its effectiveness.
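
As a rough illustration of the model family the abstract sketches, the following numpy snippet builds a two-view interaction tensor as an outer product of view features and scores it with a weight tensor factored multiplicatively into a shared component and a sparse task-specific component, each given a low-rank (CP-style) factorization. The variable names (S, B_t, U, V), the choice of two views, and the thresholding used for sparsity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch, assuming: V = 2 views; full-order interactions are the outer
# product of the view feature vectors; the per-task weight tensor is the
# elementwise product of a shared component S and a task-specific component B_t;
# and each component has a low-rank factorization to limit model complexity.
# All names and hyperparameters here are hypothetical, for illustration only.

rng = np.random.default_rng(0)
d1, d2 = 5, 4          # feature dimensions of the two views
rank, n_tasks = 3, 2   # factorization rank and number of related tasks

# Shared low-rank factors (a consensus latent representation across tasks).
U_shared = rng.standard_normal((d1, rank))
V_shared = rng.standard_normal((d2, rank))
S = U_shared @ V_shared.T                     # shared d1 x d2 interaction component

# Task-specific low-rank components with crude sparsity.
B = []
for _ in range(n_tasks):
    U_t = rng.standard_normal((d1, rank))
    V_t = rng.standard_normal((d2, rank))
    B_t = U_t @ V_t.T
    B_t[np.abs(B_t) < 0.5] = 0.0              # zero out weak entries (toy sparsity)
    B.append(B_t)

def predict(x1, x2, t):
    """Score one sample for task t via the multiplicative interaction model."""
    interactions = np.outer(x1, x2)           # full-order (pairwise) view interactions
    W_t = S * B[t]                            # shared component times task-specific component
    return float(np.sum(W_t * interactions))

x1, x2 = rng.standard_normal(d1), rng.standard_normal(d2)
print(predict(x1, x2, t=0))
```

In this toy form, the shared factor S carries the information common to all tasks while each B_t selects and reweights the interactions relevant to its task; an actual learning procedure would fit both components jointly under sparsity and low-rank constraints rather than drawing them at random.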