We propose a novel Bayesian optimization (BayesOpt) algorithm for calibrating black-box, derivative-free, expensive-to-evaluate computer models. Our approach finds model parameters x that minimize f(x) = g(h(x)), where h(x) is the model's vector-valued prediction and g(h(x)) is the sum of squared errors between that prediction and the observed data. Standard BayesOpt models f directly; by instead modeling the vector-valued output h and exploiting the known form of g, our approach outperforms standard BayesOpt by several orders of magnitude on test problems.
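To make the composite structure concrete, the sketch below illustrates the general idea under simple assumptions: each output of h is modeled by an independent GP surrogate with a fixed RBF kernel, and the acquisition is a Monte Carlo expected-improvement estimate computed on f = g(h) through posterior samples of h. The simulator h_true, the observations y_obs, and all hyperparameters are hypothetical stand-ins, not the paper's experimental setup or exact method.

```python
# Minimal sketch of composite BayesOpt: surrogate on h, acquisition on f = g(h).
# Assumptions (not from the paper): fixed-hyperparameter RBF GPs per output,
# Monte Carlo expected improvement, toy simulator and data.
import numpy as np

rng = np.random.default_rng(0)

def h_true(x):
    """Hypothetical vector-valued simulator h(x) (stand-in for the expensive model)."""
    return np.array([np.sin(3.0 * x[0]) + x[1], np.cos(2.0 * x[1]) - x[0] ** 2])

y_obs = np.array([0.3, -0.2])  # observations the model is calibrated against

def g(h_vals):
    """Sum of squared errors between model output and observations."""
    return np.sum((h_vals - y_obs) ** 2, axis=-1)

def rbf(A, B, ls=0.5):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, H, Xc, jitter=1e-6):
    """Posterior mean/covariance at candidates Xc for independent GPs on each output of h."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks, Kss = rbf(X, Xc), rbf(Xc, Xc)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ H                                    # (n_candidates, n_outputs)
    cov = Kss - Ks.T @ sol + jitter * np.eye(len(Xc))   # shared across outputs
    return mean, cov

def composite_ei(X, H, Xc, n_samples=256):
    """Monte Carlo expected improvement on f = g(h), using the surrogate on h."""
    best_f = g(H).min()
    mean, cov = gp_posterior(X, H, Xc)
    L = np.linalg.cholesky(cov)
    ei = np.zeros(len(Xc))
    for _ in range(n_samples):
        # joint posterior sample of h over all candidates, one column per output
        sample = mean + L @ rng.standard_normal((len(Xc), H.shape[1]))
        ei += np.maximum(best_f - g(sample), 0.0)
    return ei / n_samples

# Toy loop: a few random evaluations, then pick the candidate maximizing the acquisition.
X = rng.uniform(-1, 1, size=(5, 2))
H = np.array([h_true(x) for x in X])
for _ in range(10):
    Xc = rng.uniform(-1, 1, size=(200, 2))
    x_next = Xc[np.argmax(composite_ei(X, H, Xc))]
    X = np.vstack([X, x_next])
    H = np.vstack([H, h_true(x_next)])
print("best sum of squared errors found:", g(H).min())
```

The contrast with standard BayesOpt is in where the surrogate sits: a standard implementation would fit a single GP to the scalar values g(h(x)) and compute expected improvement in closed form, discarding the information that f is a known quadratic of a smoother vector-valued quantity.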