Learning to Learn with the Informative Vector Machine

Neil D. Lawrence, John C. Platt
In Proceedings of the 21st International Conference on Machine Learning (ICML), pp. 512-519, 2004.

Abstract

This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.
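The sketch below illustrates the kind of greedy selection the abstract describes: several tasks share one GP prior, and points are chosen across all tasks at once by how much they reduce posterior entropy. It is a simplified, assumption-laden illustration rather than the paper's algorithm: it uses a Gaussian (regression) noise model so the entropy reduction has a closed form, and the names rbf_kernel and greedy_select, the kernel choice, and the hyperparameter values are invented for the example. The actual MT-IVM uses ADF/EP-style updates that also handle classification likelihoods and learns the shared kernel parameters from the selected points.

# A minimal sketch (not the authors' implementation) of greedy informative-point
# selection across multiple tasks that share a single GP prior. Gaussian noise is
# assumed so that the entropy reduction of including a point has a closed form.

import numpy as np


def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel shared by all tasks (the common GP prior)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)


def greedy_select(X_tasks, noise_var=0.1, d_max=20, **kern_args):
    """Greedily pick the points that most reduce posterior entropy.

    X_tasks : list of (n_t, dim) arrays, one array of inputs per task; the
              tasks are treated as independent draws from the same GP prior.
    Returns the selected (task_index, point_index) pairs in order.
    """
    # Diagonal of each task's posterior covariance, plus low-rank factors M_t
    # such that A_t = K_t - M_t^T M_t (standard IVM-style bookkeeping).
    diag = [np.diag(rbf_kernel(X, X, **kern_args)).copy() for X in X_tasks]
    M = [np.zeros((0, len(X))) for X in X_tasks]
    active = []

    for _ in range(d_max):
        # Entropy reduction for candidate i is 0.5 * log(1 + s_ii / noise_var);
        # with a shared noise variance this is monotone in the posterior
        # variance s_ii, so comparing the diagonals across tasks is enough.
        t, i = max(
            ((t, int(np.argmax(diag[t]))) for t in range(len(X_tasks))),
            key=lambda ti: diag[ti[0]][ti[1]],
        )
        active.append((t, i))

        # Rank-one update of task t's posterior after observing point i with
        # Gaussian noise: A_t <- A_t - s_i s_i^T / (s_ii + noise_var).
        k_i = rbf_kernel(X_tasks[t], X_tasks[t][i:i + 1], **kern_args).ravel()
        s_i = k_i - M[t].T @ M[t][:, i]
        m_new = s_i / np.sqrt(s_i[i] + noise_var)
        diag[t] -= m_new ** 2
        M[t] = np.vstack([M[t], m_new])
        diag[t][i] = -np.inf  # never reselect an already-included point

    return active


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tasks = [rng.normal(size=(50, 2)) for _ in range(3)]  # three related tasks
    print(greedy_select(tasks, d_max=10))

Because every task uses the same kernel and noise variance, comparing raw posterior variances across tasks is equivalent to comparing entropy reductions, which is what keeps the per-step cost of the greedy selection low.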

Cite this Paper


BibTeX
@InProceedings{pmlr-v-lawrence-learning04,
  title = {Learning to Learn with the Informative Vector Machine},
  author = {Neil D. Lawrence and John C. Platt},
  booktitle = {Proceedings of the 21st International Conference on Machine Learning (ICML)},
  pages = {512--519},
  year = {2004},
  volume = {21},
  publisher = {ACM},
  doi = {10.1145/1015330.1015382},
  url = {http://inverseprobability.com/publications/lawrence-learning04.html},
  abstract = {This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.}
}
Endnote
%0 Conference Paper
%T Learning to Learn with the Informative Vector Machine
%A Neil D. Lawrence
%A John C. Platt
%B Proceedings of the 21st International Conference on Machine Learning (ICML)
%D 2004
%F pmlr-v-lawrence-learning04
%I ACM
%P 512-519
%R 10.1145/1015330.1015382
%U http://inverseprobability.com/publications/lawrence-learning04.html
%X This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.
RIS
TY - CPAPER
TI - Learning to Learn with the Informative Vector Machine
AU - Neil D. Lawrence
AU - John C. Platt
BT - Proceedings of the 21st International Conference on Machine Learning (ICML)
PY - 2004
ID - pmlr-v-lawrence-learning04
PB - ACM
SP - 512
EP - 519
DO - 10.1145/1015330.1015382
UR - http://inverseprobability.com/publications/lawrence-learning04.html
AB - This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.
ER -
APA
Lawrence, N.D. & Platt, J.C. (2004). Learning to Learn with the Informative Vector Machine. In Proceedings of the 21st International Conference on Machine Learning (ICML), 512-519.
