Bayesian Optimization of Composite Functions

Abstract

We consider optimization of composite objective functions, i.e., of the form f(x)=g(h(x)), where h is a black-box derivative-free expensive-to-evaluate function with vector-valued outputs, and g is a cheap-to-evaluate function taking vector-valued inputs. While these problems can be solved with standard Bayesian optimization, we propose a novel approach that exploits the composite structure of the objective function to substantially improve sampling efficiency. Our approach models h using a multi-output Gaussian process and chooses where to sample using a natural generalization of the expected improvement acquisition function, called Expected Improvement for Composite Functions (EI-CF). Although EI-CF cannot be computed in closed form, we provide a novel stochastic gradient estimator that allows its efficient maximization. We then show that our approach is asymptotically consistent, i.e., that it recovers a globally optimal solution as sampling effort grows to infinity, generalizing previous convergence results for classical EI. Numerical experiments show our approach dramatically outperforms standard Bayesian optimization benchmarks, achieving simple regret that is smaller by several orders of magnitude.
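The abstract's core idea can be made concrete with a small sketch. EI-CF has no closed form, but it is straightforward to estimate by Monte Carlo: draw samples of h(x) from the multi-output GP posterior, push them through the cheap function g, and average the positive part of the improvement over the best value seen so far. The code below is a minimal illustration, not the authors' implementation; the names `ei_cf`, `mu`, `Sigma`, `g`, and `f_best` are placeholders, and it assumes the posterior of h at a single candidate point x is summarized by a mean vector `mu` and covariance matrix `Sigma`, with the objective framed as maximization.

```python
# Minimal Monte Carlo sketch of the EI-CF acquisition value at one candidate
# point (illustrative; not the paper's code). Assumes h(x) ~ N(mu, Sigma)
# under the multi-output GP posterior, and that the problem is maximization
# (for minimization, use f_best - g(...) instead).
import numpy as np

def ei_cf(mu, Sigma, g, f_best, n_samples=1024, rng=None):
    """Estimate EI-CF(x) = E[max(0, g(h(x)) - f_best)] by Monte Carlo.

    g must be vectorized: it maps an (n_samples, m) array of h-draws
    to an (n_samples,) array of objective values.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(mu)
    # Cholesky factor of the posterior covariance, with jitter for stability.
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(m))
    z = rng.standard_normal((n_samples, m))      # fixed base samples
    h_samples = mu + z @ L.T                     # reparameterized draws of h(x)
    improvements = np.maximum(g(h_samples) - f_best, 0.0)
    return improvements.mean()

# Hypothetical usage, in the calibration-style setting the paper discusses:
# g(h) = -||h - y_obs||^2, maximized when h matches the observed targets.
y_obs = np.array([0.5, -1.0])
value = ei_cf(mu=np.zeros(2), Sigma=np.eye(2),
              g=lambda H: -np.sum((H - y_obs) ** 2, axis=1), f_best=-0.1)
```

Because each posterior draw is a deterministic transformation mu(x) + L(x)z of a fixed standard normal sample z, the estimator is differentiable in x wherever mu, L, and g are; this reparameterization construction is the standard route to the kind of stochastic gradient estimator the abstract refers to for efficiently maximizing EI-CF.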

Date
Jun 12, 2019 5:05 PM — 5:10 PM
Location
Room 101, Long Beach Convention & Entertainment Center
300 E Ocean Blvd, Long Beach, CA 90802, United States