The popularity of Bayesian optimization methods for the efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow exploration of the parameter space to occur sequentially. Often, it is desirable to propose batches of parameter values to explore simultaneously, particularly when large parallel processing facilities are available; these could be either computational or physical facets of the process being optimized. Batch methods, however, require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points, minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.
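The local-penalization idea sketched in the abstract can be illustrated in a few lines. The sketch below is an illustrative assumption, not the paper's exact formulation: the Lipschitz constant is estimated from finite differences of the observed data (the paper derives it from the Gaussian-process posterior), the penalizer is a simple smooth exclusion function, and all names (`estimate_lipschitz`, `penalized_acquisition`) are hypothetical.

```python
import numpy as np

def estimate_lipschitz(X, y):
    # Crude Lipschitz estimate: the largest finite-difference slope seen
    # between any pair of observed points. A data-based stand-in for the
    # GP-gradient-based estimate used in the paper.
    L = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d > 0:
                L = max(L, abs(y[i] - y[j]) / d)
    return max(L, 1e-8)  # guard against a degenerate zero estimate

def penalized_acquisition(x, acq, batch, L, y_best):
    # Multiply the base acquisition acq(x) by a local penalizer around each
    # pending batch point x_j. For minimization, any point within radius
    # r_j = (y_j - y_best) / L of x_j could still hide the optimum, so new
    # proposals are repelled from already-selected locations (local repulsion).
    value = acq(x)
    for (x_j, y_j) in batch:
        r_j = max(y_j - y_best, 0.0) / L
        d = np.linalg.norm(x - x_j)
        # Smooth penalizer in [0, 1): zero inside the exclusion ball,
        # approaching 1 far from x_j.
        value *= 1.0 - np.exp(-max(d - r_j, 0.0) * L)
    return value
```

A batch is then assembled greedily: maximize the penalized acquisition, append the chosen point to `batch`, and repeat until the batch is full, so only the cheap acquisition optimizations are sequential while the expensive function evaluations run in parallel.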

@InProceedings{gonzalez-batch16,
title = {Batch Bayesian Optimization via Local Penalization},
author = {Javier Gonzalez and Zhenwen Dai and Philipp Hennig and Neil D. Lawrence},
booktitle = {Proceedings of the Nineteenth International Workshop on Artificial Intelligence and Statistics},
pages = {648--657},
year = {2016},
editor = {Arthur Gretton and Christian Robert},
volume = {51},
address = {Cadiz, Spain},
publisher = {JMLR W\&CP 51},
url = {http://inverseprobability.com/publications/gonzalez-batch16.html},
abstract = {The popularity of Bayesian optimization methods for the efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow exploration of the parameter space to occur sequentially. Often, it is desirable to propose batches of parameter values to explore simultaneously, particularly when large parallel processing facilities are available; these could be either computational or physical facets of the process being optimized. Batch methods, however, require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points, minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.},
crossref = {Gretton:aistats16},
key = {Gonzalez:batch16},
linkpdf = {http://jmlr.org/proceedings/papers/v51/gonzalez16a.pdf},
OPTgroup = {}
}

%T Batch Bayesian Optimization via Local Penalization
%A Javier Gonzalez and Zhenwen Dai and Philipp Hennig and Neil D. Lawrence
%B Proceedings of the Nineteenth International Workshop on Artificial Intelligence and Statistics
%C Cadiz, Spain
%D 2016
%E Arthur Gretton and Christian Robert
%F gonzalez-batch16
%I JMLR W&CP 51
%P 648--657
%R
%U http://inverseprobability.com/publications/gonzalez-batch16.html
%V 51
%X The popularity of Bayesian optimization methods for the efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow exploration of the parameter space to occur sequentially. Often, it is desirable to propose batches of parameter values to explore simultaneously, particularly when large parallel processing facilities are available; these could be either computational or physical facets of the process being optimized. Batch methods, however, require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points, minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.

TY - CPAPER
TI - Batch Bayesian Optimization via Local Penalization
AU - Javier Gonzalez
AU - Zhenwen Dai
AU - Philipp Hennig
AU - Neil D. Lawrence
BT - Proceedings of the Nineteenth International Workshop on Artificial Intelligence and Statistics
PY - 2016/01/01
DA - 2016/01/01
ED - Arthur Gretton
ED - Christian Robert
ID - gonzalez-batch16
PB - JMLR W&CP 51
SP - 648
EP - 657
L1 - http://jmlr.org/proceedings/papers/v51/gonzalez16a.pdf
UR - http://inverseprobability.com/publications/gonzalez-batch16.html
AB - The popularity of Bayesian optimization methods for the efficient exploration of parameter spaces has led to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow exploration of the parameter space to occur sequentially. Often, it is desirable to propose batches of parameter values to explore simultaneously, particularly when large parallel processing facilities are available; these could be either computational or physical facets of the process being optimized. Batch methods, however, require modeling the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic, based on an estimate of the function's Lipschitz constant, that captures the most important aspect of this interaction (local repulsion) at negligible computational overhead. A penalized acquisition function is used to collect batches of points, minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run time, with much more elaborate alternatives.
ER -

Gonzalez, J., Dai, Z., Hennig, P. & Lawrence, N.D. (2016). Batch Bayesian Optimization via Local Penalization. Proceedings of the Nineteenth International Workshop on Artificial Intelligence and Statistics 51:648-657.