Constrained Multi-Output Min-Max¶
import obsidian
print(f'obsidian version: {obsidian.__version__}')
import pandas as pd
import plotly.express as px
import plotly.io as pio
pio.renderers.default = "plotly_mimetype+notebook"
obsidian version: 0.8.0
Introduction¶
In this tutorial, we will see how to use obsidian for multi-output optimization. To demonstrate the versatility of the approach, we will seek to maximize one response while minimizing the other.
$$\underset{X}{\operatorname{argmax}} \; HV\left(+f\left(y_1\right),\; -f\left(y_2\right)\right)$$
Furthermore, we will apply a linear constraint on the input variables, requiring that $X_1 + X_2 \leq 6$.
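As the comment in the `suggest` call below notes, obsidian's generic inequality constraints are passed in "greater-than-or-equal" form, so this constraint is rewritten by negating both sides:

$$X_1 + X_2 \leq 6 \quad \Longleftrightarrow \quad -X_1 - X_2 \geq -6$$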
Set up parameter space and initialize a design¶
from obsidian import Campaign, Target, ParamSpace, BayesianOptimizer
from obsidian.parameters import Param_Continuous

# Two continuous inputs, each bounded on [0, 10]
params = [
    Param_Continuous('X1', 0, 10),
    Param_Continuous('X2', 0, 10),
]
X_space = ParamSpace(params)

# Opposing aims: maximize Response 1 while minimizing Response 2
target = [
    Target('Response 1', aim='max'),
    Target('Response 2', aim='min')
]
campaign = Campaign(X_space, target, seed=0)

# Initialize the design with a 4-point Latin hypercube sample (LHS)
X0 = campaign.designer.initialize(4, 'LHS')
X0
|   | X1 | X2 |
|---|------|------|
| 0 | 6.25 | 8.75 |
| 1 | 1.25 | 6.25 |
| 2 | 3.75 | 1.25 |
| 3 | 8.75 | 3.75 |
Collect results (e.g. from a simulation)¶
from obsidian.experiment import Simulator
from obsidian.experiment.benchmark import branin_currin

# Simulate noisy measurements (eps sets the noise level) of the two-output Branin-Currin benchmark
simulator = Simulator(X_space, branin_currin, name='Response', eps=0.05)
y0 = simulator.simulate(X0)

# Join inputs and responses, then register them with the campaign
Z0 = pd.concat([X0, y0], axis=1)
campaign.add_data(Z0)
campaign.data
| Observation ID | X1 | X2 | Response 1 | Response 2 | Iteration |
|---|---|---|---|---|---|
| 0 | 6.25 | 8.75 | -154.239634 | -5.248404 | 0 |
| 1 | 1.25 | 6.25 | -8.751383 | -6.534766 | 0 |
| 2 | 3.75 | 1.25 | -29.269633 | -13.059490 | 0 |
| 3 | 8.75 | 3.75 | -26.305781 | -7.544486 | 0 |
Fit an optimizer and visualize results¶
campaign.fit()
GP model has been fit to data with a train-score of: 1 for response: Response 1
GP model has been fit to data with a train-score of: 1 for response: Response 2
from obsidian.plotting import surface_plot, optim_progress
surface_plot(campaign.optimizer)
Optimize new experiment suggestions¶
from obsidian.constraints import InConstraint_Generic
Note: It is a good idea to balance a set of acquisition functions with those that prefer design-space exploration. This helps to ensure that the optimizer is not severely misled by deficiencies in the dataset, particularly with small data, and it also helps to locate a global optimum.
A simple choice is Space Filling (SF), although Negative Integrated Posterior Variance (NIPV) is available for single-output optimizations, and there are various other acquisition functions whose hyperparameters can be tuned to manage the explore-exploit balance.
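For contrast, a purely exploitative variant would simply drop the 'SF' entry from the list; the sketch below reuses only the call pattern from this tutorial, where each entry in the acquisition list yields one suggested candidate (the same `ineq_constraints` argument could be passed as well):

# Hypothetical exploitation-only call (not run in this tutorial)
X_exploit, eval_exploit = campaign.optimizer.suggest(
    acquisition=[{'NEHVI': {'ref_point': [-350, -20]}}])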
X_suggest, eval_suggest = campaign.optimizer.suggest(
    acquisition=[{'NEHVI': {'ref_point': [-350, -20]}}, 'SF'],
    # X1 + X2 <= 6, written as -X1 - X2 >= -6
    ineq_constraints=[InConstraint_Generic(X_space, indices=[0, 1], coeff=[-1, -1], rhs=-6)])
pd.concat([X_suggest, eval_suggest], axis=1)
|   | X1 | X2 | Response 1 (pred) | Response 1 lb | Response 1 ub | Response 2 (pred) | Response 2 lb | Response 2 ub | f(Response 1) | f(Response 2) | aq Value | aq Value (joint) | aq Method | Expected Hypervolume (joint) | Expected Pareto |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.855368e+00 | 3.467079e-15 | -38.805388 | -101.625151 | 24.014386 | -11.443693 | -8.463541 | -14.423845 | 0.236315 | 0.973132 | 4.233924 | 4.233924 | NEHVI | 3.389429 | False |
| 1 | 5.782685e-15 | 2.974608e+00 | -29.479032 | -100.736606 | 41.778553 | -9.269232 | -5.810519 | -12.727946 | 0.375486 | 0.340895 | 0.350581 | 0.350581 | SF | 3.406860 | False |
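As a quick sanity check (not part of the original notebook), plain pandas confirms that both suggested candidates respect the $X_1 + X_2 \leq 6$ constraint:

# Hypothetical verification step: every suggested row should satisfy X1 + X2 <= 6
assert (X_suggest['X1'] + X_suggest['X2'] <= 6 + 1e-6).all()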
Collect data at new suggestions¶
# Simulate responses at the suggested conditions and append them to the campaign
y_iter1 = pd.DataFrame(simulator.simulate(X_suggest))
Z_iter1 = pd.concat([X_suggest, y_iter1, eval_suggest], axis=1)
campaign.add_data(Z_iter1)
campaign.data.tail()
| Observation ID | X1 | X2 | Response 1 | Response 2 | Iteration | Response 1 (pred) | Response 1 lb | Response 1 ub | Response 2 (pred) | Response 2 lb | ... | f(Response 2) | aq Value | aq Value (joint) | aq Method | Expected Hypervolume (joint) | Expected Pareto | Response 1 (max) (iter) | Response 2 (max) (iter) | Hypervolume (iter) | Pareto Front |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.250000e+00 | 6.250000e+00 | -8.751383 | -6.534766 | 0 | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | -8.751383 | -5.248404 | 949.270665 | True |
| 2 | 3.750000e+00 | 1.250000e+00 | -29.269633 | -13.059490 | 0 | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | -8.751383 | -5.248404 | 949.270665 | False |
| 3 | 8.750000e+00 | 3.750000e+00 | -26.305781 | -7.544486 | 0 | NaN | NaN | NaN | NaN | NaN | ... | NaN | NaN | NaN | NaN | NaN | NaN | -8.751383 | -5.248404 | 949.270665 | False |
| 4 | 1.855368e+00 | 3.467079e-15 | -116.993962 | -14.357309 | 1 | -38.805388 | -101.625151 | 24.014386 | -11.443693 | -8.463541 | ... | 0.973132 | 4.233924 | 4.233924 | NEHVI | 3.389429 | False | -8.751383 | -2.714921 | 1355.934274 | False |
| 5 | 5.782685e-15 | 2.974608e+00 | -178.155376 | -2.714921 | 1 | -29.479032 | -100.736606 | 41.778553 | -9.269232 | -5.810519 | ... | 0.340895 | 0.350581 | 0.350581 | SF | 3.406860 | False | -8.751383 | -2.714921 | 1355.934274 | True |
5 rows × 22 columns
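Note that the campaign's data table tracks which observations currently sit on the Pareto front via the `Pareto Front` column, so the non-dominated points can be pulled out with ordinary pandas indexing (a convenience sketch, not from the original notebook):

# Hypothetical convenience: view the current non-dominated observations
pareto_points = campaign.data[campaign.data['Pareto Front'] == True]
pareto_points[['X1', 'X2', 'Response 1', 'Response 2']]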
Repeat as desired¶
# Run 5 more rounds: fit, suggest (with the same constraint), simulate, append
for iter in range(5):
    campaign.fit()
    X_suggest, eval_suggest = campaign.optimizer.suggest(
        acquisition=[{'NEHVI': {'ref_point': [-350, -20]}}, 'SF'],
        ineq_constraints=[InConstraint_Generic(X_space, indices=[0, 1], coeff=[-1, -1], rhs=-6)])
    y_iter = pd.DataFrame(simulator.simulate(X_suggest))
    Z_iter = pd.concat([X_suggest, y_iter, eval_suggest], axis=1)
    campaign.add_data(Z_iter)
GP model has been fit to data with a train-score of: 0.998 for response: Response 1
GP model has been fit to data with a train-score of: 0.999 for response: Response 2
GP model has been fit to data with a train-score of: 1 for response: Response 1
GP model has been fit to data with a train-score of: 1 for response: Response 2
GP model has been fit to data with a train-score of: 1 for response: Response 1
GP model has been fit to data with a train-score of: 1 for response: Response 2
GP model has been fit to data with a train-score of: 0.999 for response: Response 1
GP model has been fit to data with a train-score of: 1 for response: Response 2
GP model has been fit to data with a train-score of: 1 for response: Response 1
GP model has been fit to data with a train-score of: 1 for response: Response 2
# Plot optimization progress and the fitted surface for each response
optim_progress(campaign)
surface_plot(campaign.optimizer, response_id=0)
surface_plot(campaign.optimizer, response_id=1)
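Since plotly express was imported in the first cell, the final trade-off between the two responses can also be inspected directly; this closing sketch (not part of the original notebook) colors each observation by its Pareto-front membership:

# Hypothetical wrap-up plot using the px import from the first cell
fig = px.scatter(campaign.data, x='Response 1', y='Response 2',
                 color='Pareto Front', hover_data=['X1', 'X2'])
fig.show()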