Supervised Dictionary Learning

David Bradley 07 Jan

NIPS overview:
1. Supervised Dictionary Learning
Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, Andrew Zisserman
It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple discriminative class models. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.
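To make the "sparse signal models" part concrete, here is a minimal sketch (not the paper's method, which adds discriminative class models on top) of the basic building block: computing the sparse code of a signal against a fixed dictionary by solving an l1-regularized least-squares problem with ISTA. The dictionary and signal here are synthetic.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, x, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.5, -2.0]              # signal built from two atoms
x = D @ a_true
a = sparse_code(D, x)
```

Full dictionary learning alternates this coding step with an update of D itself; the paper's contribution is to learn D jointly with per-class discriminative models rather than for reconstruction alone.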

2. Transfer Learning by Distribution Matching for Targeted Advertising
Steffen Bickel, Christoph Sawade, and Tobias Scheffer
We address the problem of learning classifiers for several related tasks that may differ in their joint distribution of input and output variables. For each task, small (possibly even empty) labeled samples and large unlabeled samples are available. While the unlabeled samples reflect the target distribution, the labeled samples may be biased. This setting is motivated by the problem of predicting sociodemographic features for users of web portals, based on the content which they have accessed. Here, questionnaires offered to a portion of each portal’s users produce biased samples. We derive a transfer learning procedure that produces resampling weights which match the pool of all examples to the target distribution of any given task. Transfer learning enables us to make predictions even for new portals with few or no training data and improves the overall prediction accuracy.
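The core idea of resampling weights can be illustrated with a standard discriminative density-ratio trick (a hypothetical sketch, not the paper's exact procedure): train a probabilistic classifier to distinguish the biased labeled pool from the unlabeled target pool, then weight each labeled example by the resulting odds ratio. The data and dimensions below are made up for illustration.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=500):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
X_labeled = rng.normal(-0.5, 1.0, size=(200, 1))    # biased labeled sample
X_target  = rng.normal( 0.5, 1.0, size=(200, 1))    # unlabeled target sample
X = np.hstack([np.ones((400, 1)),                   # intercept column
               np.vstack([X_labeled, X_target])])
y = np.r_[np.zeros(200), np.ones(200)]              # 1 = came from target pool

w = fit_logistic(X, y)
p_target = 1.0 / (1.0 + np.exp(-X[:200] @ w))       # P(target | x) for labeled data
weights = p_target / (1.0 - p_target)               # resampling weights
```

Labeled examples that look like they were drawn from the target distribution receive large weights, so a classifier trained on the reweighted labeled sample behaves as if it had been trained on the target distribution.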

I will also briefly discuss a purely theoretical learning paper:
Mind the Duality Gap: Logarithmic regret algorithms for online optimization
Sham M. Kakade, and Shai Shalev-Shwartz
We describe a primal-dual framework for the design and analysis of online strongly convex optimization algorithms. Our framework yields the tightest known logarithmic regret bounds for Follow-The-Leader and for the gradient descent algorithm proposed in Hazan et al. [2006]. We then show that one can interpolate between these two extreme cases. In particular, we derive a new algorithm that shares the computational simplicity of gradient descent but achieves lower regret in many practical situations. Finally, we further extend our framework for generalized strongly convex functions.
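As background for the regret discussion, here is a toy sketch (not the paper's interpolated algorithm) of the gradient-descent endpoint: online gradient descent with step size eta_t = 1/(lam*t), the classic rule that achieves O(log T) regret for lam-strongly convex losses. The loss stream below is a made-up sequence of strongly convex quadratics.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1.0                              # strong-convexity parameter
T = 1000
z = rng.normal(0.3, 0.1, size=T)       # stream of targets; f_t(w) = 0.5*lam*(w - z_t)^2

w = 0.0
losses = []
for t in range(1, T + 1):
    losses.append(0.5 * lam * (w - z[t - 1]) ** 2)   # suffer loss at current iterate
    grad = lam * (w - z[t - 1])
    w -= grad / (lam * t)                            # OGD step, eta_t = 1/(lam*t)

# regret against the best fixed point in hindsight
w_star = z.mean()
regret = sum(losses) - 0.5 * lam * np.sum((w_star - z) ** 2)
```

For these quadratics the update reduces to a running average of the targets, and the cumulative regret stays bounded by a term growing only logarithmically in T, rather than the O(sqrt(T)) rate of generic online convex optimization.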