Structured Variationally Auto-encoded Optimization

Xiaoyu Lu, Javier Gonzalez, Zhenwen Dai, Neil D. Lawrence
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3273-3281, 2018.

Abstract

We tackle the problem of optimizing a black-box objective function defined over a highly structured input space. This problem is ubiquitous in science and engineering. In machine learning, inferring the structure of a neural network and selecting the optimal kernel combination for a Gaussian process in the Automatic Statistician (AS) are two important examples. We use the AS as a case study to describe our approach, which can be easily generalized to other domains. We propose a Structure Generating Variational Auto-encoder (SG-VAE) to embed the original space of kernel combinations into a low-dimensional continuous manifold in which Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other means of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the SG-VAE's latent space, which is computed using variational inference. The key aspect of our approach is that the SG-VAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains illustrate the utility and generality of the approach described in this work.
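The core loop the abstract describes — embed the discrete structures into a continuous latent space and run Bayesian optimization there — can be sketched as below. This is a minimal illustration, not the authors' implementation: a toy quadratic stands in for the black-box objective composed with the SG-VAE decoder, the GP surrogate and expected-improvement acquisition are generic BO components, and all names (`latent_dim`, `objective`, etc.) are illustrative.

```python
import numpy as np
from math import erf

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(Z, y, Zq, noise=1e-4):
    """GP posterior mean/variance at query points Zq given data (Z, y)."""
    K = rbf_kernel(Z, Z) + noise * np.eye(len(Z))  # jitter for stability
    Ks = rbf_kernel(Z, Zq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = 1.0 - (v ** 2).sum(0)          # prior variance is 1 for this kernel
    return mean, np.maximum(var, 1e-12)

def expected_improvement(mean, var, best):
    """EI for minimization; larger means a more promising query point."""
    std = np.sqrt(var)
    z = (best - mean) / std
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mean) * cdf + std * pdf

rng = np.random.default_rng(0)
latent_dim = 2
# Stand-in for objective(decode(z)): evaluating the decoded structure.
objective = lambda z: ((z - 0.3) ** 2).sum(-1)

Z = rng.uniform(-1, 1, size=(5, latent_dim))     # initial latent designs
y = objective(Z)
for _ in range(10):                              # BO loop in latent space
    Zq = rng.uniform(-1, 1, size=(256, latent_dim))  # candidate latent points
    mean, var = gp_posterior(Z, y, Zq)
    z_next = Zq[np.argmax(expected_improvement(mean, var, y.min()))]
    Z = np.vstack([Z, z_next])                   # in SG-VAE: decode z_next,
    y = np.append(y, objective(z_next))          # then evaluate that structure

print(round(y.min(), 4))
```

In the paper's setting the surrogate's uncertainty would additionally account for the variational posterior over the latent space, which is what balances exploration and exploitation across structures.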

Cite this Paper


BibTeX
@InProceedings{lu18c,
  title     = {Structured Variationally Auto-encoded Optimization},
  author    = {Lu, Xiaoyu and Gonzalez, Javier and Dai, Zhenwen and Lawrence, Neil D.},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3273--3281},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/lu18c/lu18c.pdf},
  url       = {http://inverseprobability.com/publications/lu18c.html},
  abstract  = {We tackle the problem of optimizing a black-box objective function defined over a highly-structured input space. This problem is ubiquitous in science and engineering. In machine learning, inferring the structure of a neural network or the Automatic Statistician (AS), where the optimal kernel combination for a Gaussian process is selected, are two important examples. We use the AS as a case study to describe our approach, that can be easily generalized to other domains. We propose a Structure Generating Variational Auto-encoder (SG-VAE) to embed the original space of kernel combinations into some low-dimensional continuous manifold where Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other form of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the latent space of the SG-VAE, that is computed using variational inference. The key aspect of our approach is that the SG-VAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work.}
}
Endnote
%0 Conference Paper
%T Structured Variationally Auto-encoded Optimization
%A Xiaoyu Lu
%A Javier Gonzalez
%A Zhenwen Dai
%A Neil D. Lawrence
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F lu18c
%I PMLR
%P 3273--3281
%U http://inverseprobability.com/publications/lu18c.html
%V 80
%X We tackle the problem of optimizing a black-box objective function defined over a highly-structured input space. This problem is ubiquitous in science and engineering. In machine learning, inferring the structure of a neural network or the Automatic Statistician (AS), where the optimal kernel combination for a Gaussian process is selected, are two important examples. We use the AS as a case study to describe our approach, that can be easily generalized to other domains. We propose a Structure Generating Variational Auto-encoder (SG-VAE) to embed the original space of kernel combinations into some low-dimensional continuous manifold where Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other form of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the latent space of the SG-VAE, that is computed using variational inference. The key aspect of our approach is that the SG-VAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work.
RIS
TY  - CPAPER
TI  - Structured Variationally Auto-encoded Optimization
AU  - Xiaoyu Lu
AU  - Javier Gonzalez
AU  - Zhenwen Dai
AU  - Neil D. Lawrence
BT  - Proceedings of the 35th International Conference on Machine Learning
DA  - 2018/07/03
ED  - Jennifer Dy
ED  - Andreas Krause
ID  - lu18c
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 80
SP  - 3273
EP  - 3281
L1  - http://proceedings.mlr.press/v80/lu18c/lu18c.pdf
UR  - http://inverseprobability.com/publications/lu18c.html
AB  - We tackle the problem of optimizing a black-box objective function defined over a highly-structured input space. This problem is ubiquitous in science and engineering. In machine learning, inferring the structure of a neural network or the Automatic Statistician (AS), where the optimal kernel combination for a Gaussian process is selected, are two important examples. We use the AS as a case study to describe our approach, that can be easily generalized to other domains. We propose a Structure Generating Variational Auto-encoder (SG-VAE) to embed the original space of kernel combinations into some low-dimensional continuous manifold where Bayesian optimization (BO) ideas are used. This is possible when structural knowledge of the problem is available, which can be given via a simulator or any other form of generating potentially good solutions. The right exploration-exploitation balance is imposed by propagating into the search the uncertainty of the latent space of the SG-VAE, that is computed using variational inference. The key aspect of our approach is that the SG-VAE can be used to bias the search towards relevant regions, making it suitable for transfer learning tasks. Several experiments in various application domains are used to illustrate the utility and generality of the approach described in this work.
ER  -
APA
Lu, X., Gonzalez, J., Dai, Z. & Lawrence, N.D. (2018). Structured Variationally Auto-encoded Optimization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3273-3281. Available from http://inverseprobability.com/publications/lu18c.html.
