We consider Bayesian optimization (BO) of objective functions computed by evaluating a series of expensive-to-evaluate functions arranged in a network, so that each function takes as input the outputs of its parent nodes. While these problems can be approached with standard techniques, we propose a novel approach that leverages the network structure to improve sampling efficiency. Our approach models each node of the network with an independent Gaussian process and chooses where to sample using, as its acquisition function, the expected improvement computed under the implied posterior on the objective function. Although this acquisition function cannot be computed in closed form, we provide an efficient sample average approximation approach to maximize it. Numerical experiments show that our approach dramatically outperforms standard BO benchmarks.
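To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the core computation for a two-node chain f(x) = g(h(x)): each node is modeled with its own Gaussian process, and the expected improvement under the implied (non-Gaussian) posterior on f is estimated by Monte Carlo, with fixed base samples so the estimate is a sample average approximation that a deterministic optimizer can maximize. The toy functions `h`, `g`, the kernels, and the grid search are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy two-node network f(x) = g(h(x)); in the real setting both nodes
# are expensive, and evaluating the network reveals each node's output.
def h(x):
    return np.sin(3.0 * x)

def g(y):
    return -((y - 0.5) ** 2)

# Initial design: evaluating the network at X observes both H and F.
X = rng.uniform(0.0, 1.0, size=(8, 1))
H = h(X)
F = g(H)

# Independent GP per node, each fit on that node's observed input/output pairs.
gp_h = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, H.ravel())
gp_g = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(H, F.ravel())

f_best = F.max()

# Fixed base samples: holding these constant across acquisition evaluations
# is what makes this a sample average approximation (SAA).
N_MC = 256
z_h = rng.standard_normal(N_MC)
z_g = rng.standard_normal(N_MC)

def network_ei(x):
    """Monte Carlo estimate of EI under the composed posterior on f(x)."""
    x = np.atleast_2d(x)
    mu_h, sd_h = gp_h.predict(x, return_std=True)
    h_samps = (mu_h + sd_h * z_h).reshape(-1, 1)   # posterior draws of h(x)
    mu_g, sd_g = gp_g.predict(h_samps, return_std=True)
    f_samps = mu_g + sd_g * z_g                    # posterior draws of f(x)
    return float(np.mean(np.maximum(f_samps - f_best, 0.0)))

# Maximize the acquisition; a grid stands in for the gradient-based
# optimization one would use on the smooth SAA objective in practice.
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
x_next = grid[np.argmax([network_ei(x) for x in grid])]
```

Because the base samples are fixed, `network_ei` is a deterministic, smooth function of `x`, so it can be maximized with standard gradient-based solvers rather than re-sampling at every candidate point.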