Preferential Heteroscedastic Bayesian Optimization with
Informative Noise Priors
Sinaga, Marshal, Martinelli, Julien, and Kaski, Samuel
NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World
Preferential Bayesian optimization (PBO) is a sample-efficient framework for optimizing
a black-box function by utilizing human preferences between two candidate solutions as a
proxy for direct function evaluations. Conventional PBO relies on homoscedastic noise to
model human preference structure. However, such noise fails to accurately capture the
varying levels of human aleatoric uncertainty across different pairs of candidates. For
instance, a chemist with solid expertise in glucose-related molecules may easily compare
two such compounds but struggle with alcohol-related ones. Furthermore, PBO ignores this
uncertainty when searching for a new candidate, and consequently underestimates the risk
associated with human uncertainty. To
address this, we propose heteroscedastic noise models to learn human preference structure.
Moreover, we integrate the learned preference structure with acquisition functions that
account for aleatoric uncertainty. The noise models assign noise to an input based on its
distance to a predefined set of reliable inputs known as anchors. We empirically evaluate the
proposed approach on a range of synthetic black-box functions, demonstrating a consistent
improvement over homoscedastic PBO.
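
A minimal sketch of the anchor-based noise idea described in the abstract, under assumed details: the function name, the squared-exponential decay, the anchor set, and the noise bounds sigma_min and sigma_max are illustrative choices, not the paper's exact model. The observation noise assigned to an input grows with its distance to the nearest anchor.

    import numpy as np

    def anchor_based_noise(x, anchors, sigma_min=0.05, sigma_max=0.5, lengthscale=1.0):
        """Noise standard deviation that grows with the distance from x to its nearest anchor."""
        x = np.atleast_2d(x)              # candidate inputs, shape (n, d)
        anchors = np.atleast_2d(anchors)  # reliable "anchor" inputs, shape (m, d)
        # Pairwise Euclidean distances between candidates and anchors, shape (n, m)
        dists = np.linalg.norm(x[:, None, :] - anchors[None, :, :], axis=-1)
        nearest = dists.min(axis=1)       # distance to the closest anchor, shape (n,)
        # Interpolate smoothly from low noise at an anchor to high noise far away
        weight = 1.0 - np.exp(-((nearest / lengthscale) ** 2))
        return sigma_min + (sigma_max - sigma_min) * weight

    # Example: low noise near the anchors at 0 and 1, higher noise far from both
    anchors = np.array([[0.0], [1.0]])
    print(anchor_based_noise(np.array([[0.1], [3.0]]), anchors))

Per the abstract, such an input-dependent noise level would feed both the preference model and acquisition functions that account for aleatoric uncertainty, so candidates in regions of high human uncertainty are treated as riskier.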