Multi-Task Learning in Collaborative Filtering Recommender Systems
Multi-task learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel with a shared representation, so that what is learned for each task can help the other tasks be learned better. This paper explores personalized multi-task learning approaches for collaborative filtering, with the goal of improving the prediction performance of rating prediction systems. These methods first identify a set of users that are closely related to the user under consideration (i.e., the active user), and then learn multiple rating prediction models simultaneously: one for the active user and one for each of the related users.

A fundamental challenge for collaborative filtering algorithms is data sparsity. In practice, most users do not provide ratings for most items, so the user-item matrix is very sparse, with many ratings left undefined. As a result, recommendation accuracy is often quite poor, and a number of techniques have been proposed to address this problem.

In this paper, we tackle the data sparsity problem by solving the prediction problems for all users at the same time. In the machine learning literature, this approach is known as multi-task learning or transfer learning. The rationale behind transfer learning is that learning multiple models together allows information to be transferred among them, improving overall accuracy while requiring less training data.
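The approach above can be sketched with a minimal toy example in NumPy. Note that the rating matrix, the item features, the function names, and the mean-coupled ridge regularizer are all illustrative assumptions, not the paper's exact method: the sketch selects the users most similar to the active user and then jointly fits one linear rating predictor per user, coupling the per-user weight vectors through their shared mean so that each task's model borrows strength from the related tasks.

```python
import numpy as np

# Toy user-item rating matrix (0 = missing rating). Rows: users, cols: items.
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def related_users(R, active, k=1):
    """Pick the k users most similar to the active user (cosine similarity)."""
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R[active]) / (norms * norms[active] + 1e-9)
    sims[active] = -np.inf                      # exclude the active user itself
    return np.argsort(sims)[::-1][:k]

def multitask_fit(R, users, item_feats, lam=0.1, mu=1.0, iters=50):
    """Jointly fit one linear rating model per user; each weight vector is
    pulled toward the shared mean (a simple multi-task regularizer)."""
    W = np.zeros((len(users), item_feats.shape[1]))
    for _ in range(iters):
        mean_w = W.mean(axis=0)                 # shared representation
        for t, u in enumerate(users):
            rated = R[u] > 0                    # use only observed ratings
            X, y = item_feats[rated], R[u][rated]
            # Ridge solution with an extra pull toward the shared mean:
            A = X.T @ X + (lam + mu) * np.eye(X.shape[1])
            b = X.T @ y + mu * mean_w
            W[t] = np.linalg.solve(A, b)
    return W

# Hypothetical item features (could come from content or a factorization).
item_feats = np.array([
    [1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0], [0.1, 0.8],
])

active = 0
tasks = [active] + list(related_users(R, active, k=1))
W = multitask_fit(R, tasks, item_feats)
pred = item_feats @ W[0]                        # active user's predicted ratings
```

Even in this toy setting, the related user's observed ratings shape the shared mean vector, which in turn regularizes the active user's model on items the active user never rated; this is the sense in which learning the tasks in parallel transfers information among them.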