# Kernels for Vector-Valued Functions: a Review

Mauricio A. Álvarez, Universidad Tecnológica de Pereira, Colombia
Lorenzo Rosasco, University of Genoa
Neil D. Lawrence, University of Sheffield

#### Abstract

Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory, as they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and a considerable amount of work has been devoted to designing and learning kernels in that setting. More recently there has been increasing interest in methods that deal with multiple outputs, motivated in part by frameworks such as multitask learning. In this paper, we review different methods for designing or learning valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
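
One construction reviewed in the paper, the separable or "intrinsic coregionalization" kernel, makes the connection between the two perspectives concrete: a positive semi-definite coregionalization matrix B encodes correlations between outputs, and the multi-output covariance is the Kronecker product of B with a scalar kernel over inputs. The sketch below, in Python with NumPy, builds such a covariance for a two-output Gaussian process; the function names and parameter values are illustrative and are not taken from the paper or the accompanying multigp software.

```python
import numpy as np

def rbf_kernel(X, X2, lengthscale=1.0):
    """Scalar RBF (squared exponential) kernel k(x, x')."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X @ X2.T)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def icm_covariance(X, B, lengthscale=1.0):
    """Separable multi-output covariance K = B kron k(X, X).

    Because B and k(X, X) are both positive semi-definite, their
    Kronecker product is too, so K is a valid covariance for a
    Gaussian process whose D outputs are stacked output-by-output."""
    return np.kron(B, rbf_kernel(X, X, lengthscale))

# Coregionalization matrix for D = 2 outputs: a rank-one part W W^T
# plus a per-output diagonal, positive semi-definite by construction.
W = np.array([[1.0], [0.5]])
kappa = np.array([0.1, 0.2])
B = W @ W.T + np.diag(kappa)

X = np.linspace(0.0, 1.0, 5)[:, None]   # 5 shared input locations
K = icm_covariance(X, B)                # shape (2 * 5, 2 * 5)
assert np.all(np.linalg.eigvalsh(K) > -1e-9)  # PSD up to round-off
```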

    @TechReport{alvarez-kernels11,
      title        = {Kernels for Vector-Valued Functions: a Review},
      author       = {Mauricio A. Álvarez and Lorenzo Rosasco and Neil D. Lawrence},
      year         = {2011},
      institution  = {University of Sheffield},
      url          = {http://inverseprobability.com/publications/alvarez-kernels11.html},
      linkpdf      = {http://arxiv.org/pdf/1106.6251v1},
      linksoftware = {https://github.com/SheffieldML/multigp}
    }