Preferential Bayesian Optimization

Javier González, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1282-1291, 2017.

Abstract

Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive. We consider the case where direct access to the function is not possible, but information about user preferences is. Such scenarios arise in problems where human preferences are modeled, such as A/B tests or recommender systems. We present a new framework for this scenario, called Preferential Bayesian Optimization (PBO), that makes it possible to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels. PBO extends the applicability of standard BO ideas and generalizes previous discrete dueling approaches by modeling the probability of the winner of each duel by means of a Gaussian process model with a Bernoulli likelihood. The latent preference function is used to define a family of acquisition functions that extend the usual policies used in BO. We illustrate the benefits of PBO in a variety of experiments, showing how the way correlations are modeled is the key ingredient for drastically reducing the number of comparisons needed to find the optimum of the latent function of interest.
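The central modeling idea in the abstract, learning the probability that one input beats another from duel outcomes alone, can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `GaussianProcessClassifier` as a stand-in for the paper's Gaussian process with a Bernoulli likelihood, and the latent function `latent_f`, the fixed opponent, and all hyperparameters are assumptions chosen only for illustration.

```python
# Sketch of preference learning from duels via a GP classifier
# (a stand-in for the GP model with Bernoulli likelihood in PBO).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def latent_f(x):
    # Hypothetical latent function; in PBO it is never observed directly,
    # only through the outcomes of pairwise comparisons (duels).
    return -(x - 0.3) ** 2

# Each duel is a pair (x, x'); the label records whether x won.
X_duels = rng.uniform(0.0, 1.0, size=(40, 2))
y = (latent_f(X_duels[:, 0]) > latent_f(X_duels[:, 1])).astype(int)

# GP classifier over the duel space: its predictive probability plays
# the role of pi(x, x'), the probability that x beats x'.
model = GaussianProcessClassifier(kernel=RBF(length_scale=0.2))
model.fit(X_duels, y)

# Evaluate the learned preference surface against a fixed opponent;
# acquisition functions in PBO are built from this kind of surface.
grid = np.linspace(0.0, 1.0, 101)
pairs = np.column_stack([grid, np.full_like(grid, 0.9)])
p_win = model.predict_proba(pairs)[:, 1]
best = grid[np.argmax(p_win)]  # candidate near the latent optimum
```

Because duels only reveal comparisons, the model recovers the preference structure rather than the latent values themselves, which is why correlations across the duel space matter so much for sample efficiency.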

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-gonzalez17a,
  title = {Preferential {B}ayesian Optimization},
  author = {Javier Gonz{\'a}lez and Zhenwen Dai and Andreas Damianou and Neil D. Lawrence},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {1282--1291},
  year = {2017},
  editor = {Doina Precup and Yee Whye Teh},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v70/gonzalez17a/gonzalez17a.pdf},
  url = {http://inverseprobability.com/publications/gonzalez17a.html},
  abstract = {Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive. We consider the case where direct access to the function is not possible, but information about user preferences is. Such scenarios arise in problems where human preferences are modeled, such as A/B tests or recommender systems. We present a new framework for this scenario, called Preferential Bayesian Optimization (PBO), that makes it possible to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels. PBO extends the applicability of standard BO ideas and generalizes previous discrete dueling approaches by modeling the probability of the winner of each duel by means of a Gaussian process model with a Bernoulli likelihood. The latent preference function is used to define a family of acquisition functions that extend the usual policies used in BO. We illustrate the benefits of PBO in a variety of experiments, showing how the way correlations are modeled is the key ingredient for drastically reducing the number of comparisons needed to find the optimum of the latent function of interest.}
}
Endnote
%0 Conference Paper
%T Preferential Bayesian Optimization
%A Javier González
%A Zhenwen Dai
%A Andreas Damianou
%A Neil D. Lawrence
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-gonzalez17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1282--1291
%U http://inverseprobability.com/publications/gonzalez17a.html
%V 70
%W PMLR
%X Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive. We consider the case where direct access to the function is not possible, but information about user preferences is. Such scenarios arise in problems where human preferences are modeled, such as A/B tests or recommender systems. We present a new framework for this scenario, called Preferential Bayesian Optimization (PBO), that makes it possible to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels. PBO extends the applicability of standard BO ideas and generalizes previous discrete dueling approaches by modeling the probability of the winner of each duel by means of a Gaussian process model with a Bernoulli likelihood. The latent preference function is used to define a family of acquisition functions that extend the usual policies used in BO. We illustrate the benefits of PBO in a variety of experiments, showing how the way correlations are modeled is the key ingredient for drastically reducing the number of comparisons needed to find the optimum of the latent function of interest.
RIS
TY - CPAPER
TI - Preferential Bayesian Optimization
AU - Javier González
AU - Zhenwen Dai
AU - Andreas Damianou
AU - Neil D. Lawrence
BT - Proceedings of the 34th International Conference on Machine Learning
PY - 2017
ED - Doina Precup
ED - Yee Whye Teh
ID - pmlr-v70-gonzalez17a
PB - PMLR
SP - 1282
DP - PMLR
EP - 1291
L1 - http://proceedings.mlr.press/v70/gonzalez17a/gonzalez17a.pdf
UR - http://inverseprobability.com/publications/gonzalez17a.html
AB - Bayesian optimization (BO) has emerged during the last few years as an effective approach to optimizing black-box functions where direct queries of the objective are expensive. We consider the case where direct access to the function is not possible, but information about user preferences is. Such scenarios arise in problems where human preferences are modeled, such as A/B tests or recommender systems. We present a new framework for this scenario, called Preferential Bayesian Optimization (PBO), that makes it possible to find the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels. PBO extends the applicability of standard BO ideas and generalizes previous discrete dueling approaches by modeling the probability of the winner of each duel by means of a Gaussian process model with a Bernoulli likelihood. The latent preference function is used to define a family of acquisition functions that extend the usual policies used in BO. We illustrate the benefits of PBO in a variety of experiments, showing how the way correlations are modeled is the key ingredient for drastically reducing the number of comparisons needed to find the optimum of the latent function of interest.
ER -
APA
González, J., Dai, Z., Damianou, A. & Lawrence, N. D. (2017). Preferential Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:1282-1291.

Related Material