botorch#

BoTorch acquisition functions supported in obsidian

class obsidian.acquisition.botorch.qLogExpectedHypervolumeImprovement(model: Model, ref_point: List[float] | Tensor, partitioning: NondominatedPartitioning, sampler: MCSampler | None = None, objective: MCMultiOutputObjective | None = None, constraints: List[Callable[[Tensor], Tensor]] | None = None, X_pending: Tensor | None = None, eta: Tensor | float | None = 0.01, fat: bool = True, tau_relu: float = 1e-06, tau_max: float = 0.01)[source]#

Bases: MultiObjectiveMCAcquisitionFunction, SubsetIndexCachingMixin
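
No usage example is given upstream; a minimal sketch, assuming a two-objective model, user-supplied tensors train_X, train_Y, and test_X, and a placeholder reference point (building the partitioning directly from train_Y is a simplification):

>>> model = SingleTaskGP(train_X, train_Y)
>>> ref_point = torch.tensor([0.0, 0.0])
>>> partitioning = NondominatedPartitioning(ref_point=ref_point, Y=train_Y)
>>> qLogEHVI = qLogExpectedHypervolumeImprovement(model, ref_point, partitioning)
>>> value = qLogEHVI(test_X)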

forward(X: Tensor) → Tensor[source]#

Takes in a batch_shape x q x d X Tensor of t-batches with q d-dim design points each, and returns a Tensor with shape batch_shape’, where batch_shape’ is the broadcasted batch shape of model and input X. Should utilize the result of set_X_pending as needed to account for pending function evaluations.

class obsidian.acquisition.botorch.qLogExpectedImprovement(model: Model, best_f: float | Tensor, sampler: MCSampler | None = None, objective: MCAcquisitionObjective | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None, constraints: List[Callable[[Tensor], Tensor]] | None = None, eta: Tensor | float = 0.001, fat: bool = True, tau_max: float = 0.01, tau_relu: float = 1e-06)[source]#

Bases: LogImprovementMCAcquisitionFunction

MC-based batch Log Expected Improvement.

This computes qLogEI by (1) sampling the joint posterior over q points, (2) evaluating the smoothed log improvement over the current best for each sample, (3) smoothly maximizing over q, and (4) averaging over the samples in log space.

See [Ament2023logei] for details. Formally,

qLogEI(X) ~ log(qEI(X)) = log(E(max(max Y - best_f, 0))).

where Y ~ f(X) and X = (x_1, …, x_q).

Example

>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1024)
>>> qLogEI = qLogExpectedImprovement(model, best_f, sampler)
>>> qei = qLogEI(test_X)

class obsidian.acquisition.botorch.qLogNParEGO(model: Model, X_baseline: Tensor, scalarization_weights: Tensor | None = None, sampler: MCSampler | None = None, objective: MCMultiOutputObjective | None = None, constraints: List[Callable[[Tensor], Tensor]] | None = None, X_pending: Tensor | None = None, eta: Tensor | float = 0.001, fat: bool = True, prune_baseline: bool = False, cache_root: bool = True, tau_relu: float = 1e-06, tau_max: float = 0.01)[source]#

Bases: qLogNoisyExpectedImprovement, MultiObjectiveMCAcquisitionFunction
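
No docstring or usage example is given upstream; a minimal sketch, assuming a multi-output (multi-objective) model and user-supplied tensors train_X, train_Y, and test_X, with the training inputs reused as X_baseline:

>>> model = SingleTaskGP(train_X, train_Y)
>>> acqf = qLogNParEGO(model, train_X)
>>> value = acqf(test_X)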

class obsidian.acquisition.botorch.qLogNoisyExpectedHypervolumeImprovement(model: Model, ref_point: List[float] | Tensor, X_baseline: Tensor, sampler: MCSampler | None = None, objective: MCMultiOutputObjective | None = None, constraints: List[Callable[[Tensor], Tensor]] | None = None, X_pending: Tensor | None = None, eta: Tensor | float | None = 0.001, prune_baseline: bool = False, alpha: float = 0.0, cache_pending: bool = True, max_iep: int = 0, incremental_nehvi: bool = True, cache_root: bool = True, tau_relu: float = 1e-06, tau_max: float = 0.001, fat: bool = True, marginalize_dim: int | None = None)[source]#

Bases: NoisyExpectedHypervolumeMixin, qLogExpectedHypervolumeImprovement
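
As with qLogExpectedHypervolumeImprovement, no usage example is given upstream; a minimal sketch under the same assumptions (two objectives, placeholder reference point, training inputs reused as X_baseline):

>>> model = SingleTaskGP(train_X, train_Y)
>>> ref_point = [0.0, 0.0]
>>> qLogNEHVI = qLogNoisyExpectedHypervolumeImprovement(model, ref_point, train_X)
>>> value = qLogNEHVI(test_X)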

forward(X: Tensor) → Tensor[source]#

Takes in a batch_shape x q x d X Tensor of t-batches with q d-dim design points each, and returns a Tensor with shape batch_shape’, where batch_shape’ is the broadcasted batch shape of model and input X. Should utilize the result of set_X_pending as needed to account for pending function evaluations.

class obsidian.acquisition.botorch.qLogNoisyExpectedImprovement(model: Model, X_baseline: Tensor, sampler: MCSampler | None = None, objective: MCAcquisitionObjective | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None, constraints: List[Callable[[Tensor], Tensor]] | None = None, eta: Tensor | float = 0.001, fat: bool = True, prune_baseline: bool = False, cache_root: bool = True, tau_max: float = 0.01, tau_relu: float = 1e-06, marginalize_dim: int | None = None)[source]#

Bases: LogImprovementMCAcquisitionFunction, CachedCholeskyMCSamplerMixin

MC-based batch Log Noisy Expected Improvement.

This function does not assume a best_f is known (which would require noiseless observations). Instead, it uses samples from the joint posterior over the q test points and previously observed points. A smooth approximation to the canonical improvement over previously observed points is computed for each sample and the logarithm of the average is returned.

See [Ament2023logei] for details. Formally,

qLogNEI(X) ~ log(qNEI(X)) = log(E(max(max Y - max Y_baseline, 0))),

where (Y, Y_baseline) ~ f((X, X_baseline)), X = (x_1,…,x_q).

Example

>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1024)
>>> qLogNEI = qLogNoisyExpectedImprovement(model, train_X, sampler)
>>> acqval = qLogNEI(test_X)

compute_best_f(obj: Tensor) → Tensor[source]#

Computes the best (feasible) noisy objective value.

Parameters:

obj – A sample_shape x batch_shape x q-dim Tensor of objectives in forward.

Returns:

A sample_shape x batch_shape-dim Tensor of best feasible objectives.

class obsidian.acquisition.botorch.qNegIntegratedPosteriorVariance(model: Model, mc_points: Tensor, sampler: MCSampler | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None)[source]#

Bases: AcquisitionFunction

Batch Integrated Negative Posterior Variance for Active Learning.

This acquisition function quantifies the (negative) integrated posterior variance of the model (excluding observation noise, computed using MC integration). As such, it is a proxy for global model uncertainty and is purely focused on “exploration”, rather than the “exploitation” pursued by many of the classic Bayesian Optimization acquisition functions.

See [Seo2014activedata], [Chen2014seqexpdesign], and [Binois2017repexp].
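
No usage example is given upstream; a minimal sketch, where mc_points is a user-chosen set of integration points (the uniform sample over the unit cube below is an assumption about the design space, not a recommendation):

>>> model = SingleTaskGP(train_X, train_Y)
>>> mc_points = torch.rand(1024, train_X.shape[-1])
>>> qNIPV = qNegIntegratedPosteriorVariance(model, mc_points)
>>> value = qNIPV(test_X)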

forward(X: Tensor) → Tensor[source]#

Evaluate the acquisition function on the candidate set X.

Parameters:

X – A (b) x q x d-dim Tensor of (b) t-batches with q d-dim design points each.

Returns:

A (b)-dim Tensor of acquisition function values at the given design points X.

class obsidian.acquisition.botorch.qProbabilityOfImprovement(model: Model, best_f: float | Tensor, sampler: MCSampler | None = None, objective: MCAcquisitionObjective | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None, tau: float = 0.001, constraints: List[Callable[[Tensor], Tensor]] | None = None, eta: Tensor | float = 0.001)[source]#

Bases: SampleReducingMCAcquisitionFunction

MC-based batch Probability of Improvement.

Estimates the probability of improvement over the current best observed value by sampling from the joint posterior distribution of the q-batch. MC-based estimation of a probability involves taking the expectation of an indicator function; to support auto-differentiation, the indicator is replaced with a sigmoid function with temperature parameter tau.

qPI(X) = P(max Y >= best_f), Y ~ f(X), X = (x_1,…,x_q)

Example

>>> model = SingleTaskGP(train_X, train_Y)
>>> best_f = train_Y.max()
>>> sampler = SobolQMCNormalSampler(1024)
>>> qPI = qProbabilityOfImprovement(model, best_f, sampler)
>>> qpi = qPI(test_X)

class obsidian.acquisition.botorch.qSimpleRegret(model: Model, sampler: MCSampler | None = None, objective: MCAcquisitionObjective | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None)[source]#

Bases: SampleReducingMCAcquisitionFunction

MC-based batch Simple Regret.

Samples from the joint posterior over the q-batch and computes the simple regret.

qSR(X) = E(max Y), Y ~ f(X), X = (x_1,…,x_q)

Constraints should be provided as a ConstrainedMCObjective; passing constraints as an argument is not supported. This is because SampleReducingMCAcquisitionFunction computes the acquisition values on the sample level and then weights the sample-level acquisition values by a soft feasibility indicator, so it expects non-log acquisition function values to be non-negative. qSimpleRegret acquisition values can be negative, so we instead use a ConstrainedMCObjective, which applies the constraints to the objectives (i.e., before computing the acquisition function) and shifts negative objective values using an infeasible cost to ensure non-negativity (before applying the constraints and shifting them back); see the sketch after the example below.

Example

>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1024)
>>> qSR = qSimpleRegret(model, sampler)
>>> qsr = qSR(test_X)
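
For the constraint handling described above, a hedged sketch of routing constraints through a ConstrainedMCObjective (from botorch.acquisition.objective); the two-output layout (objective at index 0, constraint at index 1, where negative values indicate feasibility) and the infeasible_cost value are assumptions for illustration:

>>> constrained_obj = ConstrainedMCObjective(
...     objective=lambda Z, X=None: Z[..., 0],
...     constraints=[lambda Z: Z[..., 1]],
...     infeasible_cost=10.0,
... )
>>> qSR = qSimpleRegret(model, sampler, objective=constrained_obj)
>>> qsr = qSR(test_X)
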
class obsidian.acquisition.botorch.qUpperConfidenceBound(model: Model, beta: float, sampler: MCSampler | None = None, objective: MCAcquisitionObjective | None = None, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None)[source]#

Bases: SampleReducingMCAcquisitionFunction

MC-based batch Upper Confidence Bound.

Uses a reparameterization to extend UCB to qUCB for q > 1 (See Appendix A of [Wilson2017reparam].)

qUCB = E(max(mu + |Y_tilde - mu|)), where Y_tilde ~ N(mu, beta pi/2 Sigma) and f(X) has distribution N(mu, Sigma).

Constraints should be provided as a ConstrainedMCObjective; passing constraints as an argument is not supported. This is because SampleReducingMCAcquisitionFunction computes the acquisition values on the sample level and then weights the sample-level acquisition values by a soft feasibility indicator, so it expects non-log acquisition function values to be non-negative. Like qSimpleRegret, qUpperConfidenceBound acquisition values can be negative, so we instead use a ConstrainedMCObjective, which applies the constraints to the objectives (i.e., before computing the acquisition function) and shifts negative objective values using an infeasible cost to ensure non-negativity (before applying the constraints and shifting them back). The ConstrainedMCObjective sketch under qSimpleRegret applies here as well.

Example

>>> model = SingleTaskGP(train_X, train_Y)
>>> sampler = SobolQMCNormalSampler(1024)
>>> qUCB = qUpperConfidenceBound(model, 0.1, sampler)
>>> qucb = qUCB(test_X)