Transferring Knowledge across Learning Processes
International Conference on Learning Representations, 2019.
Abstract
In complex transfer learning scenarios, new tasks might not be
tightly linked to previous tasks. Approaches that transfer
information contained only in the final parameters of a source model
will therefore struggle. Instead, transfer learning at a higher
level of abstraction is needed. We propose Leap, a framework that
achieves this by transferring knowledge across learning
processes. We associate each task with a manifold on which the
training process travels from initialization to final parameters and
construct a meta-learning objective that minimizes the expected
length of this path. Our framework leverages only information
obtained during training and can be computed on the fly at
negligible cost. We demonstrate that our framework outperforms
competing methods, both in meta-learning and transfer learning, on a
set of computer vision tasks. Finally, we show that Leap can
transfer knowledge across learning processes in demanding
reinforcement learning environments (Atari) that involve millions of
gradient steps.
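The core idea above — pulling a shared initialization along each task's training path so that final parameters become reachable in fewer steps — can be illustrated with a minimal sketch. This is not the paper's implementation: the task family (toy quadratics), learning rates, and the simplified path-length gradient (each segment contributes its parameter displacement, ignoring the loss coordinate of Leap's full objective) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task family: quadratic losses f_t(theta) = 0.5*||theta - c_t||^2,
# each task t defined by its own optimum c_t.
def sample_task():
    return rng.normal(size=2)

def grad(theta, c):
    return theta - c

theta0 = np.zeros(2)            # shared initialization (the meta-parameters)
inner_lr, meta_lr, K = 0.1, 0.1, 20

for meta_step in range(100):
    c = sample_task()
    theta = theta0.copy()
    meta_grad = np.zeros_like(theta0)
    for _ in range(K):
        theta_next = theta - inner_lr * grad(theta, c)
        # Simplified path-length gradient: each segment of the training
        # path contributes its displacement (theta_i - theta_{i+1}),
        # pulling the initialization forward along the path.
        meta_grad += theta - theta_next
        theta = theta_next
    # Meta-update: move theta0 to shorten the expected path across tasks.
    theta0 -= meta_lr * meta_grad
```

With this simplification the per-task meta-gradient telescopes to `theta0 - theta_K`, so the initialization drifts toward a point from which every sampled task is reachable with a short training trajectory; here that is the mean of the task optima.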