# Kernels for Vector-Valued Functions: A Review

Mauricio A. Álvarez, Lorenzo Rosasco and Neil D. Lawrence, *Foundations and Trends in Machine Learning*, 4(3):195-266, 2012.

#### Abstract

Kernel methods are among the most popular techniques in machine learning. From a regularization perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective they are the key in the context of Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partially by frameworks like multitask learning. In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
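The monograph's central object, a valid kernel for vector-valued functions, can be illustrated with the intrinsic coregionalization model (ICM), one of the separable constructions it reviews: the joint covariance over D outputs is the Kronecker product of a D x D positive semi-definite coregionalization matrix B with a scalar input kernel k. The sketch below is illustrative only (function names, lengthscale, and the choice of B are assumptions, not taken from the paper):

```python
import numpy as np

def rbf(X, X2, lengthscale=1.0):
    """Scalar squared-exponential (RBF) kernel matrix k(X, X2)."""
    sq_dists = ((X[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def icm_kernel(X, B, lengthscale=1.0):
    """Intrinsic coregionalization model: K = B (kron) k(X, X).

    B couples the D outputs; k handles input-space correlation.
    The result is an (N*D) x (N*D) joint covariance matrix.
    """
    return np.kron(B, rbf(X, X, lengthscale))

# Two outputs coupled through a rank-1-plus-noise B (PSD by construction).
w = np.array([[1.0], [0.5]])
B = w @ w.T + 0.1 * np.eye(2)

X = np.linspace(0.0, 1.0, 5)[:, None]   # 5 inputs, 1 dimension
K = icm_kernel(X, B)                    # joint covariance over 2 outputs

# A valid multi-output kernel must produce a PSD covariance matrix:
# here that holds because the Kronecker product of PSD matrices is PSD.
assert np.linalg.eigvalsh(K).min() > -1e-10
print(K.shape)  # (10, 10)
```

Because both the regularization and Gaussian-process views discussed in the abstract require positive semi-definiteness, the final assertion is exactly the "validity" condition the monograph is concerned with.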

#### Cite this Paper

BibTeX

```
@Article{pmlr-v-alvarez-vector12,
title = {Kernels for Vector-Valued Functions: A Review},
author = {Mauricio A. Álvarez and Lorenzo Rosasco and Neil D. Lawrence},
journal = {Foundations and Trends in Machine Learning},
year = {2012},
volume = {4},
number = {3},
pages = {195--266},
doi = {10.1561/2200000036},
url = {http://inverseprobability.com/publications/alvarez-vector12.html},
abstract = {Kernel methods are among the most popular techniques in machine learning. From a regularization perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective they are the key in the context of Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partially by frameworks like multitask learning. In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.}
}
```

Endnote

```
%0 Journal Article
%T Kernels for Vector-Valued Functions: A Review
%A Mauricio A. Álvarez
%A Lorenzo Rosasco
%A Neil D. Lawrence
%D 2012
%F pmlr-v-alvarez-vector12
%I Now Publishers
%J Foundations and Trends in Machine Learning
%V 4
%N 3
%P 195--266
%R 10.1561/2200000036
%U http://inverseprobability.com
%X Kernel methods are among the most popular techniques in machine learning. From a regularization perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective they are the key in the context of Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partially by frameworks like multitask learning. In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
```

RIS

```
TY - JOUR
TI - Kernels for Vector-Valued Functions: A Review
AU - Mauricio A. Álvarez
AU - Lorenzo Rosasco
AU - Neil D. Lawrence
JO - Foundations and Trends in Machine Learning
PY - 2012
ID - pmlr-v-alvarez-vector12
PB - Now Publishers
VL - 4
IS - 3
SP - 195
EP - 266
DO - 10.1561/2200000036
UR - http://inverseprobability.com/publications/alvarez-vector12.html
AB - Kernel methods are among the most popular techniques in machine learning. From a regularization perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective they are the key in the context of Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partially by frameworks like multitask learning. In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
ER -
```

APA

Álvarez, M.A., Rosasco, L. & Lawrence, N.D. (2012). Kernels for Vector-Valued Functions: A Review. *Foundations and Trends in Machine Learning*, 4(3):195-266.
