Authors
Bruno Muller, Régis Lengellé
Title
QLTL: a Simple yet Efficient Algorithm for Semi-Supervised Transfer Learning
In
8th International Conference on Pattern Recognition Applications and Methods
Year
2019
Abstract
Most machine learning techniques rely on the assumption that training and target data share a similar underlying distribution. When this assumption is violated, they usually fail to generalise; this is one of the situations tackled by transfer learning: achieving good classification performance on different-but-related datasets. In this paper, we consider the specific case where the task is unique and where the training set(s) and the target set follow similar-but-different underlying distributions. Our method, QLTL (Quadratic Loss Transfer Learning), is a semi-supervised learning approach: we train a set of classifiers on the available training data to incorporate knowledge, and we use a centred kernel polarisation criterion to correct the probability density function shift between training and target data. Our method results in a convex problem, leading to an analytic solution. We show encouraging results on a toy example with covariate shift, and good performance on a text-document classification task, relative to recent algorithms.
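As an illustration of the centred kernel polarisation criterion mentioned in the abstract, the following is a minimal sketch of centred kernel alignment between a Gaussian kernel matrix and a binary label kernel. It is not the authors' QLTL implementation; the function names, the RBF kernel choice, and the toy data are assumptions introduced here for illustration only.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z (kernel choice assumed).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def centre_kernel(K):
    # Double-centre a kernel matrix: Kc = H K H, with H = I - (1/n) 1 1^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def centred_kernel_polarisation(K, y):
    # Centred alignment between kernel matrix K and the label kernel y y^T,
    # i.e. <Kc, Yc>_F / (||Kc||_F ||Yc||_F).
    Kc = centre_kernel(K)
    Yc = centre_kernel(np.outer(y, y))
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

# Toy usage: alignment of an RBF kernel with +1/-1 labels on two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)), rng.normal(+1.0, 1.0, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
K = rbf_kernel(X, X, gamma=0.5)
print(centred_kernel_polarisation(K, y))

A higher value of this criterion indicates that the kernel-induced similarity structure agrees more closely with the labels; in QLTL a criterion of this family is used to weigh knowledge transferred from the training data toward the target distribution.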