├── .github ├── ISSUE_TEMPLATE │ ├── bug_report.yml │ ├── config.yml │ └── feature_request.yml ├── PULL_REQUEST_TEMPLATE.md └── workflows │ ├── deploy_on_release.yml │ ├── docs.yml │ ├── lint.yml │ ├── nightly.yml │ ├── publish_website.yml │ ├── reusable_test.yml │ ├── reusable_tutorials.yml │ ├── test.yml │ ├── test_stable.yml │ └── tutorials_smoke_test.yml ├── .gitignore ├── .pre-commit-config.yaml ├── .readthedocs.yaml ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── MANIFEST.in ├── README.md ├── botorch ├── __init__.py ├── acquisition │ ├── __init__.py │ ├── acquisition.py │ ├── active_learning.py │ ├── analytic.py │ ├── bayesian_active_learning.py │ ├── cached_cholesky.py │ ├── cost_aware.py │ ├── decoupled.py │ ├── factory.py │ ├── fixed_feature.py │ ├── input_constructors.py │ ├── joint_entropy_search.py │ ├── knowledge_gradient.py │ ├── logei.py │ ├── max_value_entropy_search.py │ ├── monte_carlo.py │ ├── multi_objective │ │ ├── __init__.py │ │ ├── analytic.py │ │ ├── base.py │ │ ├── hypervolume_knowledge_gradient.py │ │ ├── joint_entropy_search.py │ │ ├── logei.py │ │ ├── max_value_entropy_search.py │ │ ├── monte_carlo.py │ │ ├── multi_fidelity.py │ │ ├── multi_output_risk_measures.py │ │ ├── objective.py │ │ ├── parego.py │ │ ├── predictive_entropy_search.py │ │ └── utils.py │ ├── multi_step_lookahead.py │ ├── objective.py │ ├── penalized.py │ ├── predictive_entropy_search.py │ ├── preference.py │ ├── prior_guided.py │ ├── proximal.py │ ├── risk_measures.py │ ├── thompson_sampling.py │ └── utils.py ├── cross_validation.py ├── exceptions │ ├── __init__.py │ ├── errors.py │ └── warnings.py ├── fit.py ├── generation │ ├── __init__.py │ ├── gen.py │ ├── sampling.py │ └── utils.py ├── logging.py ├── models │ ├── __init__.py │ ├── approximate_gp.py │ ├── contextual.py │ ├── contextual_multioutput.py │ ├── cost.py │ ├── deterministic.py │ ├── ensemble.py │ ├── fully_bayesian.py │ ├── fully_bayesian_multitask.py │ ├── gp_regression.py │ ├── gp_regression_fidelity.py │ ├── gp_regression_mixed.py │ ├── gpytorch.py │ ├── higher_order_gp.py │ ├── kernels │ │ ├── __init__.py │ │ ├── categorical.py │ │ ├── contextual_lcea.py │ │ ├── contextual_sac.py │ │ ├── downsampling.py │ │ ├── exponential_decay.py │ │ ├── infinite_width_bnn.py │ │ ├── linear_truncated_fidelity.py │ │ └── orthogonal_additive_kernel.py │ ├── latent_kronecker_gp.py │ ├── likelihoods │ │ ├── __init__.py │ │ ├── pairwise.py │ │ └── sparse_outlier_noise.py │ ├── map_saas.py │ ├── model.py │ ├── model_list_gp_regression.py │ ├── multitask.py │ ├── pairwise_gp.py │ ├── relevance_pursuit.py │ ├── robust_relevance_pursuit_model.py │ ├── transforms │ │ ├── __init__.py │ │ ├── factory.py │ │ ├── input.py │ │ ├── outcome.py │ │ └── utils.py │ └── utils │ │ ├── __init__.py │ │ ├── assorted.py │ │ ├── gpytorch_modules.py │ │ └── inducing_point_allocators.py ├── optim │ ├── __init__.py │ ├── closures │ │ ├── __init__.py │ │ ├── core.py │ │ └── model_closures.py │ ├── core.py │ ├── fit.py │ ├── homotopy.py │ ├── initializers.py │ ├── optimize.py │ ├── optimize_homotopy.py │ ├── optimize_mixed.py │ ├── parameter_constraints.py │ ├── stopping.py │ └── utils │ │ ├── __init__.py │ │ ├── acquisition_utils.py │ │ ├── common.py │ │ ├── model_utils.py │ │ ├── numpy_utils.py │ │ └── timeout.py ├── posteriors │ ├── __init__.py │ ├── base_samples.py │ ├── ensemble.py │ ├── fully_bayesian.py │ ├── gpytorch.py │ ├── higher_order.py │ ├── latent_kronecker.py │ ├── multitask.py │ ├── posterior.py │ ├── posterior_list.py │ ├── 
torch.py │ └── transformed.py ├── sampling │ ├── __init__.py │ ├── base.py │ ├── get_sampler.py │ ├── index_sampler.py │ ├── list_sampler.py │ ├── normal.py │ ├── pairwise_samplers.py │ ├── pathwise │ │ ├── __init__.py │ │ ├── features │ │ │ ├── __init__.py │ │ │ ├── generators.py │ │ │ └── maps.py │ │ ├── paths.py │ │ ├── posterior_samplers.py │ │ ├── prior_samplers.py │ │ ├── update_strategies.py │ │ └── utils.py │ ├── qmc.py │ └── stochastic_samplers.py ├── settings.py ├── test_functions │ ├── __init__.py │ ├── base.py │ ├── multi_fidelity.py │ ├── multi_objective.py │ ├── multi_objective_multi_fidelity.py │ ├── sensitivity_analysis.py │ ├── synthetic.py │ └── utils.py ├── test_utils │ ├── __init__.py │ └── mock.py └── utils │ ├── __init__.py │ ├── constants.py │ ├── constraints.py │ ├── containers.py │ ├── context_managers.py │ ├── datasets.py │ ├── dispatcher.py │ ├── evaluation.py │ ├── feasible_volume.py │ ├── low_rank.py │ ├── multi_objective │ ├── __init__.py │ ├── box_decompositions │ │ ├── __init__.py │ │ ├── box_decomposition.py │ │ ├── box_decomposition_list.py │ │ ├── dominated.py │ │ ├── non_dominated.py │ │ └── utils.py │ ├── hypervolume.py │ ├── pareto.py │ └── scalarization.py │ ├── multitask.py │ ├── objective.py │ ├── probability │ ├── __init__.py │ ├── bvn.py │ ├── lin_ess.py │ ├── linalg.py │ ├── mvnxpb.py │ ├── truncated_multivariate_normal.py │ ├── unified_skew_normal.py │ └── utils.py │ ├── rounding.py │ ├── safe_math.py │ ├── sampling.py │ ├── test_helpers.py │ ├── testing.py │ ├── torch.py │ ├── transforms.py │ └── types.py ├── botorch_community ├── README.md ├── __init__.py ├── acquisition │ ├── __init__.py │ ├── augmented_multisource.py │ ├── bayesian_active_learning.py │ ├── bll_thompson_sampling.py │ ├── discretized.py │ ├── hentropy_search.py │ ├── input_constructors.py │ ├── rei.py │ └── scorebo.py ├── models │ ├── __init__.py │ ├── blls.py │ ├── example.py │ ├── gp_regression_multisource.py │ ├── prior_fitted_network.py │ ├── utils │ │ └── prior_fitted_network.py │ ├── vbll_helper.py │ └── vblls.py ├── posteriors │ ├── __init__.py │ ├── bll_posterior.py │ └── riemann.py └── utils │ └── stat_dist.py ├── botorch_logo_lockup.png ├── botorch_logo_lockup.svg ├── docs ├── acquisition.md ├── assets │ ├── EI_MC_qMC.png │ ├── EI_optimal_val_hist.png │ ├── EI_optimizer_hist.png │ ├── EI_resampling_fixed.png │ ├── botorch_and_ax.svg │ ├── mc_acq_illustration.svg │ ├── overview_bayesopt.svg │ ├── overview_blackbox.svg │ └── overview_mcacquisition.svg ├── batching.md ├── botorch_and_ax.md ├── constraints.md ├── design_philosophy.md ├── getting_started.mdx ├── introduction.md ├── models.md ├── multi_objective.md ├── objectives.md ├── optimization.md ├── overview.md ├── papers.md ├── posteriors.md ├── samplers.md └── tutorials │ └── index.mdx ├── notebooks_community ├── README.md ├── clf_constrained_bo.ipynb ├── example.ipynb ├── hentropy_search.ipynb ├── multi_source_bo.ipynb └── vbll_thompson_sampling.ipynb ├── pyproject.toml ├── requirements-fmt.txt ├── requirements.txt ├── scripts ├── build_docs.sh ├── check_pre_commit_reqs.py ├── convert_ipynb_to_mdx.py ├── parse_sphinx.py ├── patch_site_config.py ├── run_tutorials.py ├── validate_sphinx.py └── verify_py_packages.sh ├── setup.cfg ├── setup.py ├── sphinx ├── Makefile ├── README.md ├── make.bat └── source │ ├── _static │ └── custom.css │ ├── acquisition.rst │ ├── conf.py │ ├── cross_validation.rst │ ├── exceptions.rst │ ├── fit.rst │ ├── generation.rst │ ├── index.rst │ ├── logging.rst │ ├── models.rst │ ├── optim.rst 
│ ├── posteriors.rst │ ├── sampling.rst │ ├── settings.rst │ ├── test_functions.rst │ ├── test_utils.rst │ └── utils.rst ├── test ├── __init__.py ├── acquisition │ ├── __init__.py │ ├── multi_objective │ │ ├── __init__.py │ │ ├── test_analytic.py │ │ ├── test_base.py │ │ ├── test_hypervolume_knowledge_gradient.py │ │ ├── test_joint_entropy_search.py │ │ ├── test_logei.py │ │ ├── test_max_value_entropy_search.py │ │ ├── test_monte_carlo.py │ │ ├── test_multi_fidelity.py │ │ ├── test_multi_output_risk_measures.py │ │ ├── test_objective.py │ │ ├── test_parego.py │ │ ├── test_predictive_entropy_search.py │ │ └── test_utils.py │ ├── test_acquisition.py │ ├── test_active_learning.py │ ├── test_analytic.py │ ├── test_bayesian_active_learning.py │ ├── test_cached_cholesky.py │ ├── test_cost_aware.py │ ├── test_decoupled.py │ ├── test_factory.py │ ├── test_fixed_feature.py │ ├── test_input_constructors.py │ ├── test_integration.py │ ├── test_joint_entropy_search.py │ ├── test_knowledge_gradient.py │ ├── test_logei.py │ ├── test_max_value_entropy_search.py │ ├── test_monte_carlo.py │ ├── test_multi_step_lookahead.py │ ├── test_objective.py │ ├── test_penalized.py │ ├── test_predictive_entropy_search.py │ ├── test_preference.py │ ├── test_prior_guided.py │ ├── test_proximal.py │ ├── test_risk_measures.py │ ├── test_thompson_sampling.py │ └── test_utils.py ├── exceptions │ ├── __init__.py │ ├── test_errors.py │ └── test_warnings.py ├── generation │ ├── __init__.py │ ├── test_gen.py │ ├── test_sampling.py │ └── test_utils.py ├── models │ ├── __init__.py │ ├── kernels │ │ ├── __init__.py │ │ ├── test_categorical.py │ │ ├── test_contextual.py │ │ ├── test_downsampling.py │ │ ├── test_exponential_decay.py │ │ ├── test_infinite_width_bnn.py │ │ ├── test_linear_truncated_fidelity.py │ │ └── test_orthogonal_additive_kernel.py │ ├── likelihoods │ │ └── test_pairwise_likelihood.py │ ├── test_approximate_gp.py │ ├── test_contextual.py │ ├── test_contextual_multioutput.py │ ├── test_cost.py │ ├── test_deterministic.py │ ├── test_ensemble.py │ ├── test_fully_bayesian.py │ ├── test_fully_bayesian_multitask.py │ ├── test_gp_regression.py │ ├── test_gp_regression_fidelity.py │ ├── test_gp_regression_mixed.py │ ├── test_gpytorch.py │ ├── test_higher_order_gp.py │ ├── test_latent_kronecker_gp.py │ ├── test_map_saas.py │ ├── test_model.py │ ├── test_model_list_gp_regression.py │ ├── test_multitask.py │ ├── test_pairwise_gp.py │ ├── test_relevance_pursuit.py │ ├── transforms │ │ ├── test_factory.py │ │ ├── test_input.py │ │ ├── test_outcome.py │ │ └── test_utils.py │ └── utils │ │ ├── __init__.py │ │ ├── test_assorted.py │ │ ├── test_gpytorch_modules.py │ │ └── test_inducing_point_allocators.py ├── optim │ ├── __init__.py │ ├── closures │ │ ├── __init__.py │ │ ├── test_core.py │ │ └── test_model_closures.py │ ├── test_core.py │ ├── test_fit.py │ ├── test_homotopy.py │ ├── test_initializers.py │ ├── test_numpy_converter.py │ ├── test_optimize.py │ ├── test_optimize_mixed.py │ ├── test_parameter_constraints.py │ ├── test_stopping.py │ └── utils │ │ ├── __init__.py │ │ ├── test_acquisition_utils.py │ │ ├── test_common.py │ │ ├── test_model_utils.py │ │ ├── test_numpy_utils.py │ │ └── test_timeout.py ├── posteriors │ ├── __init__.py │ ├── test_deterministic.py │ ├── test_ensemble.py │ ├── test_gpytorch.py │ ├── test_higher_order.py │ ├── test_multitask.py │ ├── test_posterior.py │ ├── test_posteriorlist.py │ ├── test_torch_posterior.py │ └── test_transformed.py ├── sampling │ ├── __init__.py │ ├── pathwise │ │ ├── 
__init__.py │ │ ├── features │ │ │ ├── __init__.py │ │ │ ├── test_generators.py │ │ │ └── test_maps.py │ │ ├── test_paths.py │ │ ├── test_posterior_samplers.py │ │ ├── test_prior_samplers.py │ │ ├── test_update_strategies.py │ │ └── test_utils.py │ ├── test_base.py │ ├── test_deterministic.py │ ├── test_get_sampler.py │ ├── test_index_sampler.py │ ├── test_list_sampler.py │ ├── test_normal.py │ ├── test_pairwise_sampler.py │ ├── test_qmc.py │ └── test_stochastic_samplers.py ├── test_cross_validation.py ├── test_cuda.py ├── test_end_to_end.py ├── test_fit.py ├── test_functions │ ├── __init__.py │ ├── test_base.py │ ├── test_multi_fidelity.py │ ├── test_multi_objective.py │ ├── test_multi_objective_multi_fidelity.py │ ├── test_sensitivity_analysis.py │ └── test_synthetic.py ├── test_logging.py ├── test_settings.py ├── test_utils │ └── test_mock.py └── utils │ ├── __init__.py │ ├── multi_objective │ ├── __init__.py │ ├── box_decompositions │ │ ├── __init__.py │ │ ├── test_box_decomposition.py │ │ ├── test_box_decomposition_list.py │ │ ├── test_dominated.py │ │ ├── test_non_dominated.py │ │ └── test_utils.py │ ├── test_hypervolume.py │ ├── test_pareto.py │ └── test_scalarization.py │ ├── probability │ ├── __init__.py │ ├── test_bvn.py │ ├── test_lin_ess.py │ ├── test_linalg.py │ ├── test_mvnxpb.py │ ├── test_truncated_multivariate_normal.py │ ├── test_unified_skew_normal.py │ └── test_utils.py │ ├── test_constants.py │ ├── test_constraints.py │ ├── test_containers.py │ ├── test_context_managers.py │ ├── test_datasets.py │ ├── test_dispatcher.py │ ├── test_evaluation.py │ ├── test_feasible_volume.py │ ├── test_low_rank.py │ ├── test_multitask.py │ ├── test_objective.py │ ├── test_rounding.py │ ├── test_safe_math.py │ ├── test_sampling.py │ ├── test_test_helpers.py │ ├── test_testing.py │ ├── test_torch.py │ └── test_transforms.py ├── test_community ├── __init__.py ├── acquisition │ ├── __init__.py │ ├── test_bayesian_active_learning.py │ ├── test_bll_thompson_sampling.py │ ├── test_discretized.py │ ├── test_hentropy_search.py │ ├── test_input_constructors.py │ ├── test_multi_source.py │ ├── test_rei.py │ └── test_scorebo.py ├── models │ ├── __init__.py │ ├── test_example.py │ ├── test_gp_regression_multisource.py │ ├── test_prior_fitted_network.py │ ├── test_vbll_helper.py │ └── test_vblls.py ├── posteriors │ ├── test_bll.py │ └── test_riemann.py └── utils │ └── test_stat_dist.py ├── tutorials ├── GIBBON_for_efficient_batch_entropy_search │ └── GIBBON_for_efficient_batch_entropy_search.ipynb ├── Multi_objective_multi_fidelity_BO │ └── Multi_objective_multi_fidelity_BO.ipynb ├── README.md ├── batch_mode_cross_validation │ └── batch_mode_cross_validation.ipynb ├── baxus │ └── baxus.ipynb ├── bo_with_warped_gp │ └── bo_with_warped_gp.ipynb ├── bope │ └── bope.ipynb ├── closed_loop_botorch_only │ └── closed_loop_botorch_only.ipynb ├── compare_mc_analytic_acquisition │ └── compare_mc_analytic_acquisition.ipynb ├── composite_bo_with_hogp │ └── composite_bo_with_hogp.ipynb ├── composite_mtbo │ └── composite_mtbo.ipynb ├── constrained_multi_objective_bo │ └── constrained_multi_objective_bo.ipynb ├── constraint_active_search │ └── constraint_active_search.ipynb ├── cost_aware_bayesian_optimization │ └── cost_aware_bayesian_optimization.ipynb ├── custom_acquisition │ └── custom_acquisition.ipynb ├── custom_model │ └── custom_model.ipynb ├── data │ └── twitter_flash_crash.csv ├── decoupled_mobo │ └── decoupled_mobo.ipynb ├── discrete_multi_fidelity_bo │ └── discrete_multi_fidelity_bo.ipynb ├── 
fit_model_with_torch_optimizer │ └── fit_model_with_torch_optimizer.ipynb ├── ibnn_bo │ └── ibnn_bo.ipynb ├── information_theoretic_acquisition_functions │ └── information_theoretic_acquisition_functions.ipynb ├── max_value_entropy │ └── max_value_entropy.ipynb ├── meta_learning_with_rgpe │ └── meta_learning_with_rgpe.ipynb ├── multi_fidelity_bo │ └── multi_fidelity_bo.ipynb ├── multi_objective_bo │ └── multi_objective_bo.ipynb ├── one_shot_kg │ └── one_shot_kg.ipynb ├── optimize_stochastic │ └── optimize_stochastic.ipynb ├── optimize_with_cmaes │ └── optimize_with_cmaes.ipynb ├── preference_bo │ └── preference_bo.ipynb ├── pretrained_models │ ├── mnist_cnn.pt │ └── mnist_vae.pt ├── relevance_pursuit_robust_regression │ └── relevance_pursuit_robust_regression.ipynb ├── risk_averse_bo_with_environmental_variables │ └── risk_averse_bo_with_environmental_variables.ipynb ├── risk_averse_bo_with_input_perturbations │ └── risk_averse_bo_with_input_perturbations.ipynb ├── robust_multi_objective_bo │ └── robust_multi_objective_bo.ipynb ├── saasbo │ └── saasbo.ipynb ├── scalable_constrained_bo │ └── scalable_constrained_bo.ipynb ├── thompson_sampling │ └── thompson_sampling.ipynb ├── turbo_1 │ └── turbo_1.ipynb └── vae_mnist │ └── vae_mnist.ipynb └── website ├── docusaurus.config.js ├── package.json ├── sidebars.js ├── src ├── components │ ├── CellOutput.jsx │ ├── LinkButtons.jsx │ └── Plotting.jsx ├── css │ └── customTheme.css └── pages │ └── index.js ├── static ├── .nojekyll ├── CNAME ├── css │ ├── alabaster.css │ ├── basic.css │ ├── code_block_buttons.css │ └── custom.css ├── img │ ├── botorch.ico │ ├── botorch.png │ ├── botorch_logo_lockup.png │ ├── botorch_logo_lockup.svg │ ├── botorch_logo_lockup_top.png │ ├── botorch_logo_lockup_white.png │ ├── expanding_arrows.svg │ ├── meta_opensource_logo_negative.svg │ ├── oss_logo.png │ ├── puzzle_pieces.svg │ └── pytorch_logo.svg ├── js │ └── code_block_buttons.js └── pygments.css ├── tutorials.json └── yarn.lock /.github/ISSUE_TEMPLATE/bug_report.yml: -------------------------------------------------------------------------------- 1 | name: 🐛 Bug Report 2 | description: File a bug report. 3 | labels: ["bug"] 4 | title: "[Bug]: " 5 | body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | Thank you for taking the time to fill out a bug report. We strive to make BoTorch a useful and stable library for all our users. 10 | - type: textarea 11 | id: what-happened 12 | attributes: 13 | label: What happened? 14 | description: Provide a detailed description of the bug as well as the expected behavior. 15 | validations: 16 | required: true 17 | - type: textarea 18 | id: repro 19 | attributes: 20 | label: Please provide a minimal, reproducible example of the unexpected behavior. 21 | description: Follow [these guidelines](https://stackoverflow.com/help/minimal-reproducible-example) for writing your example. Please ensure that the provided example is complete & runnable, including with dummy data if necessary. 22 | validations: 23 | required: true 24 | - type: textarea 25 | id: traceback 26 | attributes: 27 | label: Please paste any relevant traceback/logs produced by the example provided. 28 | description: This will be automatically formatted into code, so no need for backticks. 29 | render: shell 30 | - type: input 31 | id: botorch-version 32 | attributes: 33 | label: BoTorch Version 34 | description: What version of BoTorch are you using? 
35 | validations: 36 | required: true 37 | - type: input 38 | id: python-version 39 | attributes: 40 | label: Python Version 41 | description: What version of Python are you using? 42 | validations: 43 | required: false 44 | - type: input 45 | id: os 46 | attributes: 47 | label: Operating System 48 | description: What operating system are you using? 49 | validations: 50 | required: false 51 | - type: textarea 52 | id: suggested-fix 53 | attributes: 54 | label: (Optional) Describe any potential fixes you've considered to the issue outlined above. 55 | - type: dropdown 56 | id: pull-request 57 | attributes: 58 | label: Pull Request 59 | description: Are you willing to open a pull request fixing the bug outlined in this issue? (See [Contributing to BoTorch](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md)) 60 | options: 61 | - "Yes" 62 | - "No" 63 | - type: checkboxes 64 | id: terms 65 | attributes: 66 | label: Code of Conduct 67 | description: By submitting this issue, you agree to follow BoTorch's [Code of Conduct](https://github.com/pytorch/botorch/blob/main/CODE_OF_CONDUCT.md). 68 | options: 69 | - label: I agree to follow BoTorch's Code of Conduct 70 | required: true 71 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/config.yml: -------------------------------------------------------------------------------- 1 | blank_issues_enabled: false 2 | contact_links: 3 | - name: General Support & Discussions 4 | url: https://github.com/pytorch/botorch/discussions 5 | about: Please use discussions for general support requests and to ask broader questions related to BoTorch. This includes any support request that does not require changes to BoTorch source code. 6 | - name: Bug Bounty Program 7 | url: https://bugbounty.meta.com/ 8 | about: Meta has a bounty program for the safe disclosure of security bugs. Please go through the process outlined on that page and do not file a public issue. 9 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.yml: -------------------------------------------------------------------------------- 1 | name: 🚀 Feature Request 2 | description: Request a feature or improvement to BoTorch. 3 | labels: ["enhancement"] 4 | title: "[FEATURE REQUEST]: " 5 | body: 6 | - type: markdown 7 | attributes: 8 | value: | 9 | Thank you for taking the time to request a feature! We strive to make BoTorch a useful and rich library for our users. 10 | - type: textarea 11 | id: motivation 12 | attributes: 13 | label: Motivation 14 | description: Provide a detailed description of the problem you would like to solve via this new feature or improvement. 15 | validations: 16 | required: true 17 | - type: textarea 18 | id: pitch 19 | attributes: 20 | label: Describe the solution you'd like to see implemented in BoTorch. 21 | validations: 22 | required: true 23 | - type: textarea 24 | id: alternatives 25 | attributes: 26 | label: Describe any alternatives you've considered to the above solution. 27 | - type: textarea 28 | id: related 29 | attributes: 30 | label: Is this related to an existing issue in BoTorch or another repository? If so please include links to those Issues here. 31 | - type: dropdown 32 | id: pull-request 33 | attributes: 34 | label: Pull Request 35 | description: Are you willing to open a pull request implementing this feature? 
(See [Contributing to BoTorch](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md)) 36 | options: 37 | - "Yes" 38 | - "No" 39 | - type: checkboxes 40 | id: terms 41 | attributes: 42 | label: Code of Conduct 43 | description: By submitting this issue, you agree to follow BoTorch's [Code of Conduct](https://github.com/pytorch/botorch/blob/main/CODE_OF_CONDUCT.md). 44 | options: 45 | - label: I agree to follow BoTorch's Code of Conduct 46 | required: true 47 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 8 | 9 | ## Motivation 10 | 11 | (Write your motivation here.) 12 | 13 | ### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)? 14 | 15 | (Write your answer here.) 16 | 17 | ## Test Plan 18 | 19 | (Write your test plan here. If you changed any code, please provide us with clear instructions on how you verified your changes work. Bonus points for screenshots and videos!) 20 | 21 | ## Related PRs 22 | 23 | (If this PR adds or changes functionality, please take some time to update the docs at https://github.com/pytorch/botorch, and link to your PR here.) 24 | -------------------------------------------------------------------------------- /.github/workflows/docs.yml: -------------------------------------------------------------------------------- 1 | name: Docs 2 | 3 | on: 4 | push: 5 | branches: [ main ] 6 | pull_request: 7 | branches: [ main ] 8 | workflow_dispatch: 9 | 10 | 11 | jobs: 12 | 13 | docs: 14 | name: Build docs 15 | runs-on: ubuntu-latest 16 | strategy: 17 | fail-fast: false 18 | env: 19 | # `uv pip ...` requires venv by default. This skips that requirement. 20 | UV_SYSTEM_PYTHON: 1 21 | steps: 22 | - uses: actions/checkout@v4 23 | - name: Install uv 24 | uses: astral-sh/setup-uv@v5 25 | - name: Set up Python 26 | uses: actions/setup-python@v5 27 | with: 28 | python-version: "3.13" 29 | - name: Install dependencies 30 | env: 31 | ALLOW_LATEST_GPYTORCH_LINOP: true 32 | run: | 33 | uv pip install git+https://github.com/cornellius-gp/linear_operator.git 34 | uv pip install git+https://github.com/cornellius-gp/gpytorch.git 35 | uv pip install ."[dev, tutorials]" 36 | - name: Validate Sphinx 37 | run: | 38 | python scripts/validate_sphinx.py -p "$(pwd)" 39 | - name: Run sphinx 40 | run: | 41 | # warnings treated as errors 42 | sphinx-build -WT --keep-going sphinx/source sphinx/build 43 | - name: Validate and parse tutorials 44 | run: | 45 | python scripts/convert_ipynb_to_mdx.py --clean 46 | -------------------------------------------------------------------------------- /.github/workflows/lint.yml: -------------------------------------------------------------------------------- 1 | name: Lint 2 | 3 | on: 4 | push: 5 | branches: [ main ] 6 | pull_request: 7 | branches: [ main ] 8 | workflow_dispatch: 9 | 10 | 11 | jobs: 12 | 13 | lint: 14 | runs-on: ubuntu-latest 15 | env: 16 | # `uv pip ...` requires venv by default. This skips that requirement. 
17 | UV_SYSTEM_PYTHON: 1 18 | steps: 19 | - uses: actions/checkout@v4 20 | - name: Install uv 21 | uses: astral-sh/setup-uv@v5 22 | - name: Set up Python 23 | uses: actions/setup-python@v5 24 | with: 25 | python-version: "3.13" 26 | 27 | - name: Install dependencies 28 | run: uv pip install pre-commit 29 | 30 | - name: Run pre-commit 31 | run: pre-commit run --all-files --show-diff-on-failure 32 | -------------------------------------------------------------------------------- /.github/workflows/nightly.yml: -------------------------------------------------------------------------------- 1 | name: Nightly Cron 2 | 3 | on: 4 | schedule: 5 | # 2:30 PST 6 | - cron: '30 10 * * *' 7 | workflow_dispatch: 8 | 9 | 10 | jobs: 11 | tests-and-coverage-nightly: 12 | name: Test & Coverage 13 | uses: ./.github/workflows/reusable_test.yml 14 | with: 15 | use_latest_pytorch_gpytorch: true 16 | secrets: inherit 17 | 18 | package-test-deploy-pypi: 19 | name: Package and test deployment to test.pypi.org 20 | runs-on: ubuntu-latest 21 | permissions: 22 | id-token: write # This is required for PyPI OIDC authentication. 23 | env: 24 | # `uv pip ...` requires venv by default. This skips that requirement. 25 | UV_SYSTEM_PYTHON: 1 26 | steps: 27 | - uses: actions/checkout@v4 28 | - name: Fetch all history for all tags and branches 29 | run: git fetch --prune --unshallow 30 | - name: Install uv 31 | uses: astral-sh/setup-uv@v5 32 | - name: Set up Python 33 | uses: actions/setup-python@v5 34 | with: 35 | python-version: "3.13" 36 | - name: Install dependencies 37 | env: 38 | ALLOW_LATEST_GPYTORCH_LINOP: true 39 | run: | 40 | uv pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu 41 | uv pip install git+https://github.com/cornellius-gp/linear_operator.git 42 | uv pip install git+https://github.com/cornellius-gp/gpytorch.git 43 | uv pip install .[test] 44 | uv pip install --upgrade build setuptools setuptools_scm wheel 45 | - name: Extract reduced version and save to env var 46 | # strip the commit hash from the version to enable upload to pypi 47 | # env var will persist for subsequent steps 48 | run: | 49 | no_local_version=$(python -m setuptools_scm | cut -d "+" -f 1) 50 | echo "SETUPTOOLS_SCM_PRETEND_VERSION=${no_local_version}" >> $GITHUB_ENV 51 | - name: Build packages (wheel and source distribution) 52 | env: 53 | ALLOW_LATEST_GPYTORCH_LINOP: true 54 | run: | 55 | python -m build --sdist --wheel 56 | - name: Verify packages 57 | env: 58 | ALLOW_LATEST_GPYTORCH_LINOP: true 59 | run: | 60 | ./scripts/verify_py_packages.sh 61 | - name: Deploy to Test PyPI 62 | uses: pypa/gh-action-pypi-publish@release/v1 63 | with: 64 | repository-url: https://test.pypi.org/legacy/ 65 | skip-existing: true 66 | verbose: true 67 | 68 | publish-latest-website: 69 | name: Publish latest website 70 | needs: [tests-and-coverage-nightly, package-test-deploy-pypi] 71 | uses: ./.github/workflows/publish_website.yml 72 | permissions: 73 | pages: write 74 | id-token: write 75 | contents: write 76 | 77 | run_tutorials: 78 | name: Run tutorials without smoke test on latest PyTorch / GPyTorch 79 | uses: ./.github/workflows/reusable_tutorials.yml 80 | with: 81 | smoke_test: false 82 | use_stable_pytorch_gpytorch: false 83 | -------------------------------------------------------------------------------- /.github/workflows/publish_website.yml: -------------------------------------------------------------------------------- 1 | name: Publish Website 2 | 3 | on: 4 | workflow_call: 5 | inputs: 6 | new_version: 7 | 
required: false 8 | type: string 9 | run_tutorials: 10 | required: false 11 | type: boolean 12 | default: false 13 | workflow_dispatch: 14 | 15 | 16 | jobs: 17 | 18 | build-website: 19 | runs-on: ubuntu-latest 20 | env: 21 | # `uv pip ...` requires venv by default. This skips that requirement. 22 | UV_SYSTEM_PYTHON: 1 23 | permissions: 24 | contents: write 25 | steps: 26 | - uses: actions/checkout@v4 27 | with: 28 | ref: 'docusaurus-versions' # release branch 29 | fetch-depth: 0 30 | - name: Install uv 31 | uses: astral-sh/setup-uv@v5 32 | - name: Sync release branch with main 33 | run: | 34 | git config --global user.name "github-actions[bot]" 35 | git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com" 36 | git merge origin/main 37 | # To avoid a large number of commits we don't push this sync commit to github until a new release. 38 | - name: Set up Python 39 | uses: actions/setup-python@v5 40 | with: 41 | python-version: "3.13" 42 | - if: ${{ !inputs.new_version }} 43 | name: Install latest GPyTorch and Linear Operator 44 | run: | 45 | uv pip install git+https://github.com/cornellius-gp/linear_operator.git 46 | uv pip install git+https://github.com/cornellius-gp/gpytorch.git 47 | - name: Install dependencies 48 | env: 49 | ALLOW_LATEST_GPYTORCH_LINOP: true 50 | run: | 51 | uv pip install ."[dev, tutorials]" 52 | - if: ${{ inputs.new_version }} 53 | name: Create new docusaurus version 54 | run: | 55 | python3 scripts/convert_ipynb_to_mdx.py --clean 56 | cd website 57 | yarn 58 | yarn docusaurus docs:version ${{ inputs.new_version }} 59 | 60 | git add versioned_docs/ versioned_sidebars/ versions.json 61 | git commit -m "Create version ${{ inputs.new_version }} of site in Docusaurus" 62 | git push --force origin HEAD:docusaurus-versions 63 | - name: Build website 64 | run: | 65 | bash scripts/build_docs.sh -b 66 | - name: Upload website build as artifact 67 | id: deployment 68 | uses: actions/upload-pages-artifact@v3 69 | with: 70 | path: website/build/ 71 | 72 | deploy-website: 73 | needs: build-website 74 | permissions: 75 | pages: write 76 | id-token: write 77 | environment: 78 | name: github-pages 79 | url: ${{ steps.deployment.outputs.page_url }} 80 | runs-on: ubuntu-latest 81 | steps: 82 | - name: Deploy to GitHub Pages 83 | id: deployment 84 | uses: actions/deploy-pages@v4 85 | -------------------------------------------------------------------------------- /.github/workflows/reusable_test.yml: -------------------------------------------------------------------------------- 1 | name: Reusable Test Workflow 2 | 3 | on: 4 | workflow_call: 5 | inputs: 6 | use_latest_pytorch_gpytorch: 7 | required: true 8 | type: boolean 9 | upload_coverage: 10 | required: false 11 | type: boolean 12 | default: true 13 | 14 | 15 | jobs: 16 | tests-and-coverage: 17 | name: Tests and coverage (Python ${{ matrix.python-version }}, ${{ matrix.os }}) 18 | runs-on: ${{ matrix.os }} 19 | strategy: 20 | fail-fast: false 21 | matrix: 22 | os: ["ubuntu-latest", "macos-14", "windows-latest"] 23 | python-version: ["3.10", "3.13"] 24 | env: 25 | # `uv pip ...` requires venv by default. This skips that requirement. 
26 | UV_SYSTEM_PYTHON: 1 27 | steps: 28 | - uses: actions/checkout@v4 29 | - name: Install uv 30 | uses: astral-sh/setup-uv@v5 31 | - name: Set up Python ${{ matrix.python-version }} 32 | uses: actions/setup-python@v5 33 | with: 34 | python-version: ${{ matrix.python-version }} 35 | - name: Install (auto-install dependencies) 36 | if: ${{ !inputs.use_latest_pytorch_gpytorch }} 37 | run: | 38 | uv pip install .[test] 39 | - name: Install dependencies with latest PyTorch & GPyTorch 40 | if: ${{ inputs.use_latest_pytorch_gpytorch }} 41 | env: 42 | ALLOW_LATEST_GPYTORCH_LINOP: true 43 | run: | 44 | uv pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu 45 | uv pip install git+https://github.com/cornellius-gp/linear_operator.git 46 | uv pip install git+https://github.com/cornellius-gp/gpytorch.git 47 | uv pip install .[test] 48 | - name: Unit tests and coverage -- BoTorch 49 | run: | 50 | pytest -ra test/ --cov botorch/ --cov-report term-missing --cov-report xml:botorch_cov.xml 51 | - name: Unit tests and coverage -- BoTorch Community 52 | run: | 53 | pytest -ra test_community/ --cov botorch_community/ --cov-report term-missing --cov-report xml:botorch_community_cov.xml 54 | - name: Upload coverage 55 | if: ${{ inputs.upload_coverage && runner.os == 'Linux' && matrix.python-version == 3.10 }} 56 | uses: codecov/codecov-action@v4 57 | with: 58 | fail_ci_if_error: true 59 | token: ${{ secrets.CODECOV_TOKEN }} 60 | files: ./botorch_cov.xml,./botorch_community_cov.xml 61 | -------------------------------------------------------------------------------- /.github/workflows/reusable_tutorials.yml: -------------------------------------------------------------------------------- 1 | name: Reusable Tutorials Workflow 2 | 3 | on: 4 | workflow_dispatch: 5 | inputs: 6 | smoke_test: 7 | required: true 8 | type: boolean 9 | use_stable_pytorch_gpytorch: 10 | required: true 11 | type: boolean 12 | workflow_call: 13 | inputs: 14 | smoke_test: 15 | required: false 16 | type: boolean 17 | default: true 18 | use_stable_pytorch_gpytorch: 19 | required: false 20 | type: boolean 21 | default: false 22 | 23 | jobs: 24 | tutorials: 25 | name: Run tutorials 26 | runs-on: ubuntu-latest 27 | env: 28 | # `uv pip ...` requires venv by default. This skips that requirement. 29 | UV_SYSTEM_PYTHON: 1 30 | steps: 31 | - uses: actions/checkout@v4 32 | - name: Install uv 33 | uses: astral-sh/setup-uv@v5 34 | - name: Set up Python 35 | uses: actions/setup-python@v5 36 | with: 37 | python-version: "3.13" 38 | - name: Fetch all history for all tags and branches 39 | # We need to do this so setuptools_scm knows how to set the BoTorch version. 
40 | run: git fetch --prune --unshallow 41 | - if: ${{ !inputs.use_stable_pytorch_gpytorch }} 42 | name: Install latest PyTorch & GPyTorch 43 | run: | 44 | uv pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu 45 | uv pip install git+https://github.com/cornellius-gp/linear_operator.git 46 | uv pip install git+https://github.com/cornellius-gp/gpytorch.git 47 | - if: ${{ inputs.use_stable_pytorch_gpytorch }} 48 | name: Install min required PyTorch, GPyTorch, and linear_operator 49 | run: | 50 | python setup.py egg_info 51 | req_txt="botorch.egg-info/requires.txt" 52 | min_torch_version=$(grep '\btorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 53 | min_gpytorch_version=$(grep '\bgpytorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 54 | min_linear_operator_version=$(grep '\blinear_operator[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 55 | uv pip install "numpy<2" # Numpy >2.0 is not compatible with PyTorch <2.2. 56 | uv pip install "torch==${min_torch_version}" "gpytorch==${min_gpytorch_version}" "linear_operator==${min_linear_operator_version}" torchvision 57 | - name: Install BoTorch with tutorials dependencies 58 | env: 59 | ALLOW_LATEST_GPYTORCH_LINOP: true 60 | run: | 61 | uv pip install .[tutorials] 62 | - if: ${{ inputs.smoke_test }} 63 | name: Run tutorials with smoke test 64 | run: | 65 | python scripts/run_tutorials.py -p "$(pwd)" -s 66 | - if: ${{ !inputs.smoke_test }} 67 | name: Run tutorials without smoke test 68 | run: | 69 | python scripts/run_tutorials.py -p "$(pwd)" 70 | -------------------------------------------------------------------------------- /.github/workflows/test.yml: -------------------------------------------------------------------------------- 1 | name: Test 2 | 3 | on: 4 | push: 5 | branches: [ main ] 6 | pull_request: 7 | branches: [ main ] 8 | workflow_dispatch: 9 | 10 | jobs: 11 | 12 | tests-and-coverage: 13 | name: Test & Coverage 14 | uses: ./.github/workflows/reusable_test.yml 15 | with: 16 | use_latest_pytorch_gpytorch: true 17 | secrets: inherit 18 | -------------------------------------------------------------------------------- /.github/workflows/test_stable.yml: -------------------------------------------------------------------------------- 1 | name: Test against stable 2 | 3 | on: 4 | workflow_dispatch: 5 | 6 | jobs: 7 | 8 | tests-and-coverage-stable: 9 | name: Test & Coverage 10 | uses: ./.github/workflows/reusable_test.yml 11 | with: 12 | use_latest_pytorch_gpytorch: false 13 | upload_coverage: false 14 | secrets: inherit 15 | 16 | tests-and-coverage-min-req: 17 | name: Tests and coverage min req. torch, gpytorch & linear_operator versions (Python ${{ matrix.python-version }}, ${{ matrix.os }}) 18 | runs-on: ${{ matrix.os }} 19 | strategy: 20 | fail-fast: false 21 | matrix: 22 | os: ["ubuntu-latest", "macos-14"] 23 | python-version: ["3.10", "3.13"] 24 | env: 25 | # `uv pip ...` requires venv by default. This skips that requirement. 26 | UV_SYSTEM_PYTHON: 1 27 | steps: 28 | - uses: actions/checkout@v4 29 | - name: Install uv 30 | uses: astral-sh/setup-uv@v5 31 | - name: Set up Python ${{ matrix.python-version }} 32 | uses: actions/setup-python@v5 33 | with: 34 | python-version: ${{ matrix.python-version }} 35 | - name: Install dependencies 36 | run: | 37 | uv pip install setuptools # Needed for next line on Python 3.13. 
38 | python setup.py egg_info 39 | req_txt="botorch.egg-info/requires.txt" 40 | min_torch_version=$(grep '\btorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 41 | # The earliest PyTorch version on Python 3.13 available for all OS is 2.6.0. 42 | min_torch_version="$(if ${{ matrix.python-version == '3.13' }}; then echo "2.6.0"; else echo "${min_torch_version}"; fi)" 43 | min_gpytorch_version=$(grep '\bgpytorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 44 | min_linear_operator_version=$(grep '\blinear_operator[>=]=' ${req_txt} | sed 's/[^0-9.]//g') 45 | uv pip install "numpy<2" # Numpy >2.0 is not compatible with PyTorch <2.2. 46 | uv pip install "torch==${min_torch_version}" "gpytorch==${min_gpytorch_version}" "linear_operator==${min_linear_operator_version}" 47 | uv pip install .[test] 48 | - name: Unit tests and coverage -- BoTorch 49 | run: | 50 | pytest -ra test/ --cov botorch/ --cov-report term-missing --cov-report xml:botorch_cov.xml 51 | - name: Unit tests and coverage -- BoTorch Community 52 | run: | 53 | pytest -ra test_community/ --cov botorch_community/ --cov-report term-missing --cov-report xml:botorch_community_cov.xml 54 | 55 | run_tutorials_stable: 56 | name: Run tutorials without smoke test on min req. versions of PyTorch & GPyTorch 57 | uses: ./.github/workflows/reusable_tutorials.yml 58 | with: 59 | smoke_test: false 60 | use_stable_pytorch_gpytorch: true 61 | 62 | run_tutorials_stable_smoke_test: 63 | name: Run tutorials with smoke test on min req. versions of PyTorch & GPyTorch 64 | uses: ./.github/workflows/reusable_tutorials.yml 65 | with: 66 | smoke_test: true 67 | use_stable_pytorch_gpytorch: true 68 | -------------------------------------------------------------------------------- /.github/workflows/tutorials_smoke_test.yml: -------------------------------------------------------------------------------- 1 | name: Tutorials with smoke test 2 | 3 | on: 4 | pull_request: 5 | branches: [ main ] 6 | push: 7 | branches: [ main ] 8 | workflow_dispatch: 9 | 10 | 11 | jobs: 12 | run_tutorials_with_smoke_test: 13 | name: Run tutorials with smoke test on latest PyTorch / GPyTorch 14 | uses: ./.github/workflows/reusable_tutorials.yml 15 | with: 16 | smoke_test: true 17 | use_stable_pytorch_gpytorch: false 18 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # Distribution / packaging 7 | .Python 8 | env/ 9 | build/ 10 | develop-eggs/ 11 | dist/ 12 | downloads/ 13 | eggs/ 14 | .eggs/ 15 | lib/ 16 | lib64/ 17 | parts/ 18 | sdist/ 19 | var/ 20 | wheels/ 21 | *.egg-info/ 22 | .installed.cfg 23 | *.egg 24 | 25 | # watchman 26 | # Watchman helper 27 | .watchmanconfig 28 | 29 | # OSX 30 | *.DS_Store 31 | 32 | # Atom plugin files and ctags 33 | .ftpconfig 34 | .ftpconfig.cson 35 | .ftpignore 36 | *.tags 37 | *.tags1 38 | 39 | # PyInstaller 40 | # Usually these files are written by a python script from a template 41 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
42 | *.manifest 43 | *.spec 44 | 45 | # Installer logs 46 | pip-log.txt 47 | pip-delete-this-directory.txt 48 | 49 | # Unit test / coverage reports 50 | htmlcov/ 51 | .tox/ 52 | .coverage 53 | .coverage.* 54 | .cache 55 | nosetests.xml 56 | coverage.xml 57 | *.cover 58 | .hypothesis/ 59 | .pytest_cache/ 60 | 61 | # PyBuilder 62 | target/ 63 | 64 | # Jupyter Notebook 65 | .ipynb_checkpoints 66 | 67 | # pyenv 68 | .python-version 69 | 70 | # dotenv 71 | .env 72 | 73 | # virtualenv 74 | .venv 75 | venv/ 76 | ENV/ 77 | 78 | # mypy 79 | .mypy_cache/ 80 | 81 | # vim 82 | *.swp 83 | 84 | # cython-generated C-code 85 | botorch/qmc/sobol.c* 86 | 87 | # version file 88 | botorch/version.py 89 | 90 | # Sphinx documentation 91 | sphinx/build/ 92 | 93 | # Docusaurus 94 | website/build/ 95 | website/i18n/ 96 | website/node_modules/ 97 | website/.docusaurus/ 98 | 99 | ## Generated for tutorials 100 | website/_tutorials/ 101 | website/static/files/ 102 | docs/tutorials/* 103 | !docs/tutorials/index.mdx 104 | 105 | ## Generated for Sphinx 106 | website/pages/api/ 107 | website/static/js/* 108 | !website/static/js/code_block_buttons.js 109 | website/static/_sphinx-sources/ 110 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | repos: 2 | - repo: local 3 | hooks: 4 | - id: check-requirements-versions 5 | name: Check pre-commit formatting versions 6 | entry: python scripts/check_pre_commit_reqs.py 7 | language: python 8 | always_run: true 9 | pass_filenames: false 10 | additional_dependencies: 11 | - PyYAML 12 | 13 | - repo: https://github.com/omnilib/ufmt 14 | rev: v2.8.0 15 | hooks: 16 | - id: ufmt 17 | additional_dependencies: 18 | - black==24.4.2 19 | - usort==1.0.8.post1 20 | - ruff-api==0.1.0 21 | - stdlibs==2024.1.28 22 | args: [format] 23 | 24 | - repo: https://github.com/pycqa/flake8 25 | rev: 7.0.0 26 | hooks: 27 | - id: flake8 28 | additional_dependencies: 29 | - flake8-docstrings 30 | -------------------------------------------------------------------------------- /.readthedocs.yaml: -------------------------------------------------------------------------------- 1 | version: "2" 2 | 3 | build: 4 | os: "ubuntu-22.04" 5 | tools: 6 | python: "3.12" 7 | jobs: 8 | post_install: 9 | # Install latest botorch if not on a released version 10 | - | 11 | tag=$(eval "git name-rev --name-only --tags HEAD") 12 | if [ $tag = "undefined" ]; then 13 | pip install git+https://github.com/cornellius-gp/linear_operator.git 14 | pip install git+https://github.com/cornellius-gp/gpytorch.git 15 | fi 16 | 17 | python: 18 | install: 19 | - method: pip 20 | path: . 21 | extra_requirements: 22 | - dev 23 | 24 | sphinx: 25 | configuration: sphinx/source/conf.py 26 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) Meta Platforms, Inc. and affiliates. 
4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include LICENSE 2 | recursive-exclude docs * 3 | recursive-exclude scripts * 4 | recursive-exclude sphinx * 5 | recursive-exclude test * 6 | recursive-exclude tutorials * 7 | recursive-exclude website * 8 | -------------------------------------------------------------------------------- /botorch/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | import gpytorch.settings as gp_settings 8 | import linear_operator.settings as linop_settings 9 | from botorch import ( 10 | acquisition, 11 | exceptions, 12 | models, 13 | optim, 14 | posteriors, 15 | settings, 16 | test_functions, 17 | ) 18 | from botorch.cross_validation import batch_cross_validation 19 | from botorch.fit import fit_fully_bayesian_model_nuts, fit_gpytorch_mll 20 | from botorch.generation.gen import ( 21 | gen_candidates_scipy, 22 | gen_candidates_torch, 23 | get_best_candidates, 24 | ) 25 | from botorch.logging import logger 26 | from botorch.utils import manual_seed 27 | 28 | try: 29 | # Marking this as a manual import to avoid autodeps complaints 30 | # due to imports from non-existent file. 31 | # lint-ignore: UnusedImportsRule 32 | from botorch.version import version as __version__ # @manual 33 | except Exception: # pragma: no cover 34 | __version__ = "Unknown" 35 | 36 | logger.info( 37 | "Turning off `fast_computations` in linear operator and increasing " 38 | "`max_cholesky_size` and `max_eager_kernel_size` to 4096, and " 39 | "`cholesky_max_tries` to 6. The approximate computations available in " 40 | "GPyTorch aim to speed up GP training and inference in large data " 41 | "regime but they are generally not robust enough to be used in a BO-loop. " 42 | "See gpytorch.settings & linear_operator.settings for more details." 
43 | ) 44 | linop_settings._fast_covar_root_decomposition._default = False 45 | linop_settings._fast_log_prob._default = False 46 | linop_settings._fast_solves._default = False 47 | linop_settings.cholesky_max_tries._global_value = 6 48 | linop_settings.max_cholesky_size._global_value = 4096 49 | gp_settings.max_eager_kernel_size._global_value = 4096 50 | 51 | 52 | __all__ = [ 53 | "acquisition", 54 | "batch_cross_validation", 55 | "exceptions", 56 | "fit_fully_bayesian_model_nuts", 57 | "fit_gpytorch_mll", 58 | "gen_candidates_scipy", 59 | "gen_candidates_torch", 60 | "get_best_candidates", 61 | "manual_seed", 62 | "models", 63 | "optim", 64 | "posteriors", 65 | "settings", 66 | "test_functions", 67 | ] 68 | -------------------------------------------------------------------------------- /botorch/acquisition/multi_objective/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.acquisition.multi_objective.analytic import ExpectedHypervolumeImprovement 8 | from botorch.acquisition.multi_objective.base import ( 9 | MultiObjectiveAnalyticAcquisitionFunction, 10 | MultiObjectiveMCAcquisitionFunction, 11 | ) 12 | from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import ( 13 | qHypervolumeKnowledgeGradient, 14 | qMultiFidelityHypervolumeKnowledgeGradient, 15 | ) 16 | from botorch.acquisition.multi_objective.logei import ( 17 | qLogExpectedHypervolumeImprovement, 18 | qLogNoisyExpectedHypervolumeImprovement, 19 | ) 20 | from botorch.acquisition.multi_objective.max_value_entropy_search import ( 21 | qMultiObjectiveMaxValueEntropy, 22 | ) 23 | from botorch.acquisition.multi_objective.monte_carlo import ( 24 | qExpectedHypervolumeImprovement, 25 | qNoisyExpectedHypervolumeImprovement, 26 | ) 27 | from botorch.acquisition.multi_objective.multi_fidelity import MOMF 28 | from botorch.acquisition.multi_objective.objective import ( 29 | IdentityMCMultiOutputObjective, 30 | MCMultiOutputObjective, 31 | WeightedMCMultiOutputObjective, 32 | ) 33 | from botorch.acquisition.multi_objective.utils import ( 34 | get_default_partitioning_alpha, 35 | prune_inferior_points_multi_objective, 36 | ) 37 | 38 | 39 | __all__ = [ 40 | "ExpectedHypervolumeImprovement", 41 | "get_default_partitioning_alpha", 42 | "IdentityMCMultiOutputObjective", 43 | "MCMultiOutputObjective", 44 | "MOMF", 45 | "MultiObjectiveAnalyticAcquisitionFunction", 46 | "MultiObjectiveMCAcquisitionFunction", 47 | "prune_inferior_points_multi_objective", 48 | "qExpectedHypervolumeImprovement", 49 | "qHypervolumeKnowledgeGradient", 50 | "qLogExpectedHypervolumeImprovement", 51 | "qLogNoisyExpectedHypervolumeImprovement", 52 | "qMultiFidelityHypervolumeKnowledgeGradient", 53 | "qMultiObjectiveMaxValueEntropy", 54 | "qNoisyExpectedHypervolumeImprovement", 55 | "WeightedMCMultiOutputObjective", 56 | ] 57 | -------------------------------------------------------------------------------- /botorch/exceptions/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
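# Illustrative sketch, not part of the BoTorch sources: the settings block in
# botorch/__init__.py above disables linear_operator's fast approximate
# computations and raises the Cholesky / eager-kernel thresholds to 4096 when
# botorch is imported. The same attributes can be re-assigned afterwards if a
# different trade-off is desired; the values below are illustrative assumptions,
# not recommendations, and should be set after `import botorch` has run.
import gpytorch.settings as gp_settings
import linear_operator.settings as linop_settings

# Re-enable fast (approximate) solves, e.g. when working with very large datasets.
linop_settings._fast_solves._default = True
# Lower the exact-Cholesky and eager-kernel thresholds again (2048 is arbitrary).
linop_settings.max_cholesky_size._global_value = 2048
gp_settings.max_eager_kernel_size._global_value = 2048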
6 | 7 | from botorch.exceptions.errors import ( 8 | BotorchError, 9 | BotorchTensorDimensionError, 10 | CandidateGenerationError, 11 | InputDataError, 12 | ModelFittingError, 13 | OptimizationTimeoutError, 14 | UnsupportedError, 15 | ) 16 | from botorch.exceptions.warnings import ( 17 | BadInitialCandidatesWarning, 18 | BotorchTensorDimensionWarning, 19 | BotorchWarning, 20 | CostAwareWarning, 21 | InputDataWarning, 22 | NumericsWarning, 23 | OptimizationWarning, 24 | SamplingWarning, 25 | UserInputWarning, 26 | ) 27 | 28 | 29 | __all__ = [ 30 | "BadInitialCandidatesWarning", 31 | "BotorchError", 32 | "BotorchTensorDimensionError", 33 | "BotorchTensorDimensionWarning", 34 | "BotorchWarning", 35 | "CostAwareWarning", 36 | "InputDataWarning", 37 | "InputDataError", 38 | "BadInitialCandidatesWarning", 39 | "CandidateGenerationError", 40 | "ModelFittingError", 41 | "NumericsWarning", 42 | "OptimizationTimeoutError", 43 | "OptimizationWarning", 44 | "SamplingWarning", 45 | "UnsupportedError", 46 | "UserInputWarning", 47 | ] 48 | -------------------------------------------------------------------------------- /botorch/exceptions/errors.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | Botorch Errors. 9 | """ 10 | 11 | from typing import Any 12 | 13 | import numpy.typing as npt 14 | 15 | 16 | class BotorchError(Exception): 17 | r"""Base botorch exception.""" 18 | 19 | pass 20 | 21 | 22 | class CandidateGenerationError(BotorchError): 23 | r"""Exception raised during candidate generation.""" 24 | 25 | pass 26 | 27 | 28 | class DeprecationError(BotorchError): 29 | r"""Exception raised due to deprecations.""" 30 | 31 | pass 32 | 33 | 34 | class InputDataError(BotorchError): 35 | r"""Exception raised when input data does not comply with conventions.""" 36 | 37 | pass 38 | 39 | 40 | class UnsupportedError(BotorchError): 41 | r"""Currently unsupported feature.""" 42 | 43 | pass 44 | 45 | 46 | class BotorchTensorDimensionError(BotorchError): 47 | r"""Exception raised when a tensor violates a botorch convention.""" 48 | 49 | pass 50 | 51 | 52 | class ModelFittingError(Exception): 53 | r"""Exception raised when attempts to fit a model terminate unsuccessfully.""" 54 | 55 | pass 56 | 57 | 58 | class OptimizationTimeoutError(BotorchError): 59 | r"""Exception raised when optimization times out.""" 60 | 61 | def __init__( 62 | self, /, *args: Any, current_x: npt.NDArray, runtime: float, **kwargs: Any 63 | ) -> None: 64 | r""" 65 | Args: 66 | *args: Standard args to `BoTorchError`. 67 | current_x: A numpy array representing the current iterate. 68 | runtime: The total runtime in seconds after which the optimization 69 | timed out. 70 | **kwargs: Standard kwargs to `BoTorchError`. 71 | """ 72 | super().__init__(*args, **kwargs) 73 | self.current_x = current_x 74 | self.runtime = runtime 75 | 76 | 77 | class OptimizationGradientError(BotorchError, RuntimeError): 78 | r"""Exception raised when gradient array `gradf` contains NaNs.""" 79 | 80 | def __init__(self, /, *args: Any, current_x: npt.NDArray, **kwargs: Any) -> None: 81 | r""" 82 | Args: 83 | *args: Standard args to `BoTorchError`. 84 | current_x: A numpy array representing the current iterate. 85 | **kwargs: Standard kwargs to `BoTorchError`. 
86 | """ 87 | super().__init__(*args, **kwargs) 88 | self.current_x = current_x 89 | 90 | 91 | class InfeasibilityError(BotorchError, ValueError): 92 | r"""Exception raised when infeasibility occurs.""" 93 | 94 | pass 95 | -------------------------------------------------------------------------------- /botorch/generation/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | 8 | from botorch.generation.gen import ( 9 | gen_candidates_scipy, 10 | gen_candidates_torch, 11 | get_best_candidates, 12 | ) 13 | from botorch.generation.sampling import BoltzmannSampling, MaxPosteriorSampling 14 | 15 | 16 | __all__ = [ 17 | "gen_candidates_scipy", 18 | "gen_candidates_torch", 19 | "get_best_candidates", 20 | "BoltzmannSampling", 21 | "MaxPosteriorSampling", 22 | ] 23 | -------------------------------------------------------------------------------- /botorch/logging.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | import logging 8 | 9 | import torch 10 | 11 | LOG_LEVEL_DEFAULT = logging.CRITICAL 12 | 13 | 14 | def _get_logger( 15 | name: str = "botorch", level: int = LOG_LEVEL_DEFAULT 16 | ) -> logging.Logger: 17 | """Gets a default botorch logger 18 | 19 | Logging level can be tuned via botorch.settings.log_level 20 | 21 | Args: 22 | name: Name for logger instance 23 | level: Logging threshold for the given logger. Logs of greater or 24 | equal severity will be printed to STDERR 25 | """ 26 | logger = logging.getLogger(name) 27 | logger.setLevel(level) 28 | # Add timestamps to log messages. 29 | console = logging.StreamHandler() 30 | formatter = logging.Formatter( 31 | fmt="[%(levelname)s %(asctime)s] %(name)s: %(message)s", 32 | datefmt="%m-%d %H:%M:%S", 33 | ) 34 | console.setFormatter(formatter) 35 | logger.addHandler(console) 36 | logger.propagate = False 37 | return logger 38 | 39 | 40 | def shape_to_str(shape: torch.Size) -> str: 41 | return f"`{' x '.join(str(i) for i in shape)}`" 42 | 43 | 44 | logger = _get_logger() 45 | -------------------------------------------------------------------------------- /botorch/models/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree.
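# Illustrative sketch, not part of the BoTorch sources: botorch/logging.py above
# creates the module-level `logger` with a default threshold of logging.CRITICAL,
# so lower-severity messages are suppressed. The threshold can be lowered with the
# standard logging API; the INFO level chosen here is an illustrative assumption.
import logging

from botorch.logging import logger  # the logger returned by _get_logger()

# Show INFO-and-above messages; the handler attached in _get_logger() writes
# them to STDERR with timestamps.
logger.setLevel(logging.INFO)
logger.info("BoTorch logger is now verbose.")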
6 | 7 | from botorch.models.approximate_gp import ( 8 | ApproximateGPyTorchModel, 9 | SingleTaskVariationalGP, 10 | ) 11 | from botorch.models.cost import AffineFidelityCostModel 12 | from botorch.models.deterministic import ( 13 | AffineDeterministicModel, 14 | GenericDeterministicModel, 15 | PosteriorMeanModel, 16 | ) 17 | from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP 18 | from botorch.models.fully_bayesian_multitask import SaasFullyBayesianMultiTaskGP 19 | 20 | from botorch.models.gp_regression import SingleTaskGP 21 | from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP 22 | from botorch.models.gp_regression_mixed import MixedSingleTaskGP 23 | from botorch.models.higher_order_gp import HigherOrderGP 24 | 25 | from botorch.models.map_saas import add_saas_prior, AdditiveMapSaasSingleTaskGP 26 | from botorch.models.model import ModelList 27 | from botorch.models.model_list_gp_regression import ModelListGP 28 | from botorch.models.multitask import KroneckerMultiTaskGP, MultiTaskGP 29 | from botorch.models.pairwise_gp import PairwiseGP, PairwiseLaplaceMarginalLogLikelihood 30 | 31 | __all__ = [ 32 | "add_saas_prior", 33 | "AdditiveMapSaasSingleTaskGP", 34 | "AffineDeterministicModel", 35 | "AffineFidelityCostModel", 36 | "ApproximateGPyTorchModel", 37 | "SaasFullyBayesianSingleTaskGP", 38 | "SaasFullyBayesianMultiTaskGP", 39 | "GenericDeterministicModel", 40 | "HigherOrderGP", 41 | "KroneckerMultiTaskGP", 42 | "MixedSingleTaskGP", 43 | "ModelList", 44 | "ModelListGP", 45 | "MultiTaskGP", 46 | "PairwiseGP", 47 | "PairwiseLaplaceMarginalLogLikelihood", 48 | "PosteriorMeanModel", 49 | "SingleTaskGP", 50 | "SingleTaskMultiFidelityGP", 51 | "SingleTaskVariationalGP", 52 | ] 53 | -------------------------------------------------------------------------------- /botorch/models/kernels/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.models.kernels.categorical import CategoricalKernel 8 | from botorch.models.kernels.downsampling import DownsamplingKernel 9 | from botorch.models.kernels.exponential_decay import ExponentialDecayKernel 10 | from botorch.models.kernels.infinite_width_bnn import InfiniteWidthBNNKernel 11 | from botorch.models.kernels.linear_truncated_fidelity import ( 12 | LinearTruncatedFidelityKernel, 13 | ) 14 | 15 | 16 | __all__ = [ 17 | "CategoricalKernel", 18 | "DownsamplingKernel", 19 | "ExponentialDecayKernel", 20 | "InfiniteWidthBNNKernel", 21 | "LinearTruncatedFidelityKernel", 22 | ] 23 | -------------------------------------------------------------------------------- /botorch/models/kernels/categorical.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | import torch 8 | from gpytorch.kernels.kernel import Kernel 9 | from torch import Tensor 10 | 11 | 12 | class CategoricalKernel(Kernel): 13 | r"""A Kernel for categorical features. 14 | 15 | Computes `exp(-dist(x1, x2) / lengthscale)`, where 16 | `dist(x1, x2)` is zero if `x1 == x2` and one if `x1 != x2`. 
17 | If the last dimension is not a batch dimension, then the 18 | mean is considered. 19 | 20 | Note: This kernel is NOT differentiable w.r.t. the inputs. 21 | """ 22 | 23 | has_lengthscale = True 24 | 25 | def forward( 26 | self, 27 | x1: Tensor, 28 | x2: Tensor, 29 | diag: bool = False, 30 | last_dim_is_batch: bool = False, 31 | ) -> Tensor: 32 | delta = x1.unsqueeze(-2) != x2.unsqueeze(-3) 33 | dists = delta / self.lengthscale.unsqueeze(-2) 34 | if last_dim_is_batch: 35 | dists = dists.transpose(-3, -1) 36 | else: 37 | dists = dists.mean(-1) 38 | res = torch.exp(-dists) 39 | if diag: 40 | res = torch.diagonal(res, dim1=-1, dim2=-2) 41 | return res 42 | -------------------------------------------------------------------------------- /botorch/models/likelihoods/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.models.likelihoods.pairwise import ( 8 | PairwiseLogitLikelihood, 9 | PairwiseProbitLikelihood, 10 | ) 11 | 12 | 13 | __all__ = [ 14 | "PairwiseProbitLikelihood", 15 | "PairwiseLogitLikelihood", 16 | ] 17 | -------------------------------------------------------------------------------- /botorch/models/transforms/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.models.transforms.factory import get_rounding_input_transform 8 | from botorch.models.transforms.input import ( 9 | ChainedInputTransform, 10 | Normalize, 11 | Round, 12 | Warp, 13 | ) 14 | from botorch.models.transforms.outcome import ( 15 | Bilog, 16 | ChainedOutcomeTransform, 17 | Log, 18 | Power, 19 | Standardize, 20 | ) 21 | 22 | 23 | __all__ = [ 24 | "get_rounding_input_transform", 25 | "Bilog", 26 | "ChainedInputTransform", 27 | "ChainedOutcomeTransform", 28 | "Log", 29 | "Normalize", 30 | "Power", 31 | "Round", 32 | "Standardize", 33 | "Warp", 34 | ] 35 | -------------------------------------------------------------------------------- /botorch/models/utils/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
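# An illustrative sketch of the CategoricalKernel defined above (not part of this
# module). With the lengthscale pinned to 1, two 3-dimensional categorical points
# that disagree in exactly one position have mean mismatch 1/3, so the kernel value
# is exp(-1/3) ~= 0.7165.
import torch

from botorch.models.kernels import CategoricalKernel

kernel = CategoricalKernel()
kernel.lengthscale = 1.0
x1 = torch.tensor([[0.0, 1.0, 2.0]])
x2 = torch.tensor([[0.0, 1.0, 0.0]])  # differs from x1 only in the last feature
print(kernel(x1, x2).to_dense())  # tensor([[0.7165]])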
6 | 7 | from botorch.models.utils.assorted import ( 8 | _make_X_full, 9 | add_output_dim, 10 | check_min_max_scaling, 11 | check_no_nans, 12 | check_standardization, 13 | consolidate_duplicates, 14 | detect_duplicates, 15 | fantasize, 16 | gpt_posterior_settings, 17 | mod_batch_shape, 18 | multioutput_to_batch_mode_transform, 19 | validate_input_scaling, 20 | ) 21 | 22 | 23 | __all__ = [ 24 | "_make_X_full", 25 | "add_output_dim", 26 | "check_no_nans", 27 | "check_min_max_scaling", 28 | "check_standardization", 29 | "fantasize", 30 | "gpt_posterior_settings", 31 | "multioutput_to_batch_mode_transform", 32 | "mod_batch_shape", 33 | "validate_input_scaling", 34 | "detect_duplicates", 35 | "consolidate_duplicates", 36 | ] 37 | -------------------------------------------------------------------------------- /botorch/optim/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.optim.closures import ( 8 | ForwardBackwardClosure, 9 | get_loss_closure, 10 | get_loss_closure_with_grads, 11 | ) 12 | from botorch.optim.core import ( 13 | OptimizationResult, 14 | OptimizationStatus, 15 | scipy_minimize, 16 | torch_minimize, 17 | ) 18 | from botorch.optim.homotopy import ( 19 | FixedHomotopySchedule, 20 | Homotopy, 21 | HomotopyParameter, 22 | LinearHomotopySchedule, 23 | LogLinearHomotopySchedule, 24 | ) 25 | from botorch.optim.initializers import ( 26 | initialize_q_batch, 27 | initialize_q_batch_nonneg, 28 | initialize_q_batch_topn, 29 | ) 30 | from botorch.optim.optimize import ( 31 | gen_batch_initial_conditions, 32 | optimize_acqf, 33 | optimize_acqf_cyclic, 34 | optimize_acqf_discrete, 35 | optimize_acqf_discrete_local_search, 36 | optimize_acqf_mixed, 37 | ) 38 | from botorch.optim.optimize_homotopy import optimize_acqf_homotopy 39 | from botorch.optim.optimize_mixed import optimize_acqf_mixed_alternating 40 | from botorch.optim.stopping import ExpMAStoppingCriterion 41 | 42 | 43 | __all__ = [ 44 | "ForwardBackwardClosure", 45 | "get_loss_closure", 46 | "get_loss_closure_with_grads", 47 | "gen_batch_initial_conditions", 48 | "initialize_q_batch", 49 | "initialize_q_batch_nonneg", 50 | "initialize_q_batch_topn", 51 | "OptimizationResult", 52 | "OptimizationStatus", 53 | "optimize_acqf", 54 | "optimize_acqf_cyclic", 55 | "optimize_acqf_discrete", 56 | "optimize_acqf_discrete_local_search", 57 | "optimize_acqf_mixed", 58 | "optimize_acqf_mixed_alternating", 59 | "optimize_acqf_homotopy", 60 | "scipy_minimize", 61 | "torch_minimize", 62 | "ExpMAStoppingCriterion", 63 | "FixedHomotopySchedule", 64 | "Homotopy", 65 | "HomotopyParameter", 66 | "LinearHomotopySchedule", 67 | "LogLinearHomotopySchedule", 68 | ] 69 | -------------------------------------------------------------------------------- /botorch/optim/closures/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | from botorch.optim.closures.core import ( 8 | ForwardBackwardClosure, 9 | NdarrayOptimizationClosure, 10 | ) 11 | from botorch.optim.closures.model_closures import ( 12 | get_loss_closure, 13 | get_loss_closure_with_grads, 14 | ) 15 | 16 | 17 | __all__ = [ 18 | "ForwardBackwardClosure", 19 | "get_loss_closure", 20 | "get_loss_closure_with_grads", 21 | "NdarrayOptimizationClosure", 22 | ] 23 | -------------------------------------------------------------------------------- /botorch/optim/utils/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.optim.utils.acquisition_utils import ( 8 | columnwise_clamp, 9 | fix_features, 10 | get_X_baseline, 11 | ) 12 | from botorch.optim.utils.common import ( 13 | _handle_numerical_errors, 14 | _warning_handler_template, 15 | ) 16 | from botorch.optim.utils.model_utils import ( 17 | get_data_loader, 18 | get_name_filter, 19 | get_parameters, 20 | get_parameters_and_bounds, 21 | sample_all_priors, 22 | TorchAttr, 23 | ) 24 | from botorch.optim.utils.numpy_utils import ( 25 | as_ndarray, 26 | get_bounds_as_ndarray, 27 | get_tensors_as_ndarray_1d, 28 | set_tensors_from_ndarray_1d, 29 | ) 30 | from botorch.optim.utils.timeout import minimize_with_timeout 31 | 32 | __all__ = [ 33 | "_handle_numerical_errors", 34 | "_warning_handler_template", 35 | "as_ndarray", 36 | "columnwise_clamp", 37 | "fix_features", 38 | "get_name_filter", 39 | "get_bounds_as_ndarray", 40 | "get_data_loader", 41 | "get_parameters", 42 | "get_parameters_and_bounds", 43 | "get_tensors_as_ndarray_1d", 44 | "get_X_baseline", 45 | "minimize_with_timeout", 46 | "sample_all_priors", 47 | "set_tensors_from_ndarray_1d", 48 | "TorchAttr", 49 | ] 50 | -------------------------------------------------------------------------------- /botorch/optim/utils/common.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | r"""General-purpose optimization utilities.""" 8 | 9 | from __future__ import annotations 10 | 11 | from collections.abc import Callable 12 | 13 | from logging import debug as logging_debug 14 | from warnings import warn_explicit, WarningMessage 15 | 16 | import numpy as np 17 | import numpy.typing as npt 18 | from linear_operator.utils.errors import NanError, NotPSDError 19 | 20 | 21 | def _handle_numerical_errors( 22 | error: RuntimeError, x: npt.NDArray, dtype: np.dtype | None = None 23 | ) -> tuple[npt.NDArray, npt.NDArray]: 24 | if isinstance(error, NotPSDError): 25 | raise error 26 | error_message = error.args[0] if len(error.args) > 0 else "" 27 | if ( 28 | isinstance(error, NanError) 29 | or "singular" in error_message # old pytorch message 30 | or "input is not positive-definite" in error_message # since pytorch #63864 31 | ): 32 | _dtype = x.dtype if dtype is None else dtype 33 | return np.full((), "nan", dtype=_dtype), np.full_like(x, "nan", dtype=_dtype) 34 | raise error # pragma: nocover 35 | 36 | 37 | def _warning_handler_template( 38 | w: WarningMessage, 39 | debug: Callable[[WarningMessage], bool] | None = None, 40 | rethrow: Callable[[WarningMessage], bool] | None = None, 41 | ) -> bool: 42 | r"""Helper for making basic warning handlers. Typically used with functools.partial. 43 | 44 | Args: 45 | w: The WarningMessage to be resolved and filtered out or returned unresolved. 46 | debug: Optional callable used to specify that a warning should be 47 | resolved as a logging statement at the DEBUG level. 48 | rethrow: Optional callable used to specify that a warning should be 49 | resolved by rethrowing the warning. 50 | 51 | Returns: 52 | Boolean indicating whether or not the warning message was resolved. 53 | """ 54 | if debug and debug(w): 55 | logging_debug(str(w.message)) 56 | return True 57 | 58 | if rethrow and rethrow(w): 59 | warn_explicit(str(w.message), w.category, w.filename, w.lineno) 60 | return True 61 | 62 | return False 63 | -------------------------------------------------------------------------------- /botorch/posteriors/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.posteriors.fully_bayesian import ( 8 | FullyBayesianPosterior, 9 | GaussianMixturePosterior, 10 | ) 11 | from botorch.posteriors.gpytorch import GPyTorchPosterior 12 | from botorch.posteriors.higher_order import HigherOrderGPPosterior 13 | from botorch.posteriors.multitask import MultitaskGPPosterior 14 | from botorch.posteriors.posterior import Posterior 15 | from botorch.posteriors.posterior_list import PosteriorList 16 | from botorch.posteriors.torch import TorchPosterior 17 | from botorch.posteriors.transformed import TransformedPosterior 18 | 19 | __all__ = [ 20 | "GaussianMixturePosterior", 21 | "FullyBayesianPosterior", 22 | "GPyTorchPosterior", 23 | "HigherOrderGPPosterior", 24 | "MultitaskGPPosterior", 25 | "Posterior", 26 | "PosteriorList", 27 | "TorchPosterior", 28 | "TransformedPosterior", 29 | ] 30 | -------------------------------------------------------------------------------- /botorch/posteriors/base_samples.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 
3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from __future__ import annotations 8 | 9 | import torch 10 | from gpytorch.distributions.multitask_multivariate_normal import ( 11 | MultitaskMultivariateNormal, 12 | ) 13 | from torch import Tensor 14 | 15 | 16 | def _reshape_base_samples_non_interleaved( 17 | mvn: MultitaskMultivariateNormal, base_samples: Tensor, sample_shape: torch.Size 18 | ) -> Tensor: 19 | r"""Reshape base samples to account for non-interleaved MT-MVNs. 20 | 21 | This method is important for making sure that the `n`th base sample 22 | only effects the posterior sample for the `p`th point if `p >= n`. 23 | Without this reshaping, for M>=2, the posterior samples for all `n` 24 | points would be affected. 25 | 26 | Args: 27 | mvn: A MultitaskMultivariateNormal distribution. 28 | base_samples: A `sample_shape x `batch_shape` x n x m`-dim 29 | tensor of base_samples. 30 | sample_shape: The sample shape. 31 | 32 | Returns: 33 | A `sample_shape x `batch_shape` x n x m`-dim tensor of 34 | base_samples suitable for a non-interleaved-multi-task 35 | or single-task covariance matrix. 36 | """ 37 | if not mvn._interleaved: 38 | new_shape = sample_shape + mvn._output_shape[:-2] + mvn._output_shape[:-3:-1] 39 | base_samples = ( 40 | base_samples.transpose(-1, -2) 41 | .view(new_shape) 42 | .reshape(sample_shape + mvn.loc.shape) 43 | .view(base_samples.shape) 44 | ) 45 | return base_samples 46 | -------------------------------------------------------------------------------- /botorch/sampling/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.sampling.base import MCSampler 8 | from botorch.sampling.get_sampler import get_sampler 9 | from botorch.sampling.list_sampler import ListSampler 10 | from botorch.sampling.normal import IIDNormalSampler, SobolQMCNormalSampler 11 | from botorch.sampling.pairwise_samplers import ( 12 | PairwiseIIDNormalSampler, 13 | PairwiseMCSampler, 14 | PairwiseSobolQMCNormalSampler, 15 | ) 16 | from botorch.sampling.qmc import MultivariateNormalQMCEngine, NormalQMCEngine 17 | from botorch.sampling.stochastic_samplers import ForkedRNGSampler, StochasticSampler 18 | from torch.quasirandom import SobolEngine 19 | 20 | 21 | __all__ = [ 22 | "ForkedRNGSampler", 23 | "get_sampler", 24 | "IIDNormalSampler", 25 | "ListSampler", 26 | "MCSampler", 27 | "MultivariateNormalQMCEngine", 28 | "NormalQMCEngine", 29 | "PairwiseIIDNormalSampler", 30 | "PairwiseMCSampler", 31 | "PairwiseSobolQMCNormalSampler", 32 | "SobolEngine", 33 | "SobolQMCNormalSampler", 34 | "StochasticSampler", 35 | ] 36 | -------------------------------------------------------------------------------- /botorch/sampling/index_sampler.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | Sampler to be used with `EnsemblePosteriors` to enable 9 | deterministic optimization of acquisition functions with ensemble models. 
10 | """ 11 | 12 | from __future__ import annotations 13 | 14 | import torch 15 | from botorch.posteriors.ensemble import EnsemblePosterior 16 | from botorch.sampling.base import MCSampler 17 | from torch import Tensor 18 | 19 | 20 | class IndexSampler(MCSampler): 21 | r"""A sampler that calls `posterior.rsample_from_base_samples` to 22 | generate the samples via index base samples.""" 23 | 24 | def forward(self, posterior: EnsemblePosterior) -> Tensor: 25 | r"""Draws MC samples from the posterior. 26 | 27 | Args: 28 | posterior: The ensemble posterior to sample from. 29 | 30 | Returns: 31 | The samples drawn from the posterior. 32 | """ 33 | self._construct_base_samples(posterior=posterior) 34 | samples = posterior.rsample_from_base_samples( 35 | sample_shape=self.sample_shape, base_samples=self.base_samples 36 | ) 37 | return samples 38 | 39 | def _construct_base_samples(self, posterior: EnsemblePosterior) -> None: 40 | r"""Constructs base samples as indices to sample with them from 41 | the Posterior. 42 | 43 | Args: 44 | posterior: The ensemble posterior to construct the base samples 45 | for. 46 | """ 47 | if self.base_samples is None or self.base_samples.shape != self.sample_shape: 48 | with torch.random.fork_rng(): 49 | torch.manual_seed(self.seed) 50 | base_samples = torch.multinomial( 51 | posterior.weights, 52 | num_samples=self.sample_shape.numel(), 53 | replacement=True, 54 | ).reshape(self.sample_shape) 55 | self.register_buffer("base_samples", base_samples) 56 | if self.base_samples.device != posterior.device: 57 | self.to(device=posterior.device) # pragma: nocover 58 | 59 | def _update_base_samples( 60 | self, posterior: EnsemblePosterior, base_sampler: IndexSampler 61 | ) -> None: 62 | r"""Null operation just needed for compatibility with 63 | `CachedCholeskyAcquisitionFunction`.""" 64 | pass 65 | -------------------------------------------------------------------------------- /botorch/sampling/list_sampler.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | A `SamplerList` for sampling from a `PosteriorList`. 9 | """ 10 | 11 | from __future__ import annotations 12 | 13 | import torch 14 | from botorch.exceptions.errors import UnsupportedError 15 | from botorch.posteriors.posterior_list import PosteriorList 16 | from botorch.sampling.base import MCSampler 17 | from torch import Tensor 18 | from torch.nn import ModuleList 19 | 20 | 21 | class ListSampler(MCSampler): 22 | def __init__(self, *samplers: MCSampler) -> None: 23 | r"""A list of samplers for sampling from a `PosteriorList`. 24 | 25 | Args: 26 | samplers: A variable number of samplers. This should include 27 | a sampler for each posterior. 28 | """ 29 | super(MCSampler, self).__init__() 30 | self.samplers = ModuleList(samplers) 31 | self._validate_samplers() 32 | 33 | def _validate_samplers(self) -> None: 34 | r"""Checks that the samplers share the same sample shape.""" 35 | sample_shapes = [s.sample_shape for s in self.samplers] 36 | if not all(sample_shapes[0] == ss for ss in sample_shapes): 37 | raise UnsupportedError( 38 | "ListSampler requires all samplers to have the same sample shape." 
39 | ) 40 | 41 | @property 42 | def sample_shape(self) -> torch.Size: 43 | r"""The sample shape of the underlying samplers.""" 44 | self._validate_samplers() 45 | return self.samplers[0].sample_shape 46 | 47 | def forward(self, posterior: PosteriorList) -> Tensor: 48 | r"""Samples from the posteriors and concatenates the samples. 49 | 50 | Args: 51 | posterior: A `PosteriorList` to sample from. 52 | 53 | Returns: 54 | The samples drawn from the posterior. 55 | """ 56 | samples_list = [ 57 | s(posterior=p) for s, p in zip(self.samplers, posterior.posteriors) 58 | ] 59 | return posterior._reshape_and_cat(tensors=samples_list) 60 | 61 | def _update_base_samples( 62 | self, posterior: PosteriorList, base_sampler: ListSampler 63 | ) -> None: 64 | r"""Update the sampler to use the original base samples for X_baseline. 65 | 66 | This is used in CachedCholeskyAcquisitionFunctions to ensure consistency. 67 | 68 | Args: 69 | posterior: The posterior for which the base samples are constructed. 70 | base_sampler: The base sampler to retrieve the base samples from. 71 | """ 72 | self._instance_check(base_sampler=base_sampler) 73 | for s, p, bs in zip(self.samplers, posterior.posteriors, base_sampler.samplers): 74 | s._update_base_samples(posterior=p, base_sampler=bs) 75 | -------------------------------------------------------------------------------- /botorch/sampling/pathwise/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | 8 | from botorch.sampling.pathwise.features import ( 9 | gen_kernel_features, 10 | KernelEvaluationMap, 11 | KernelFeatureMap, 12 | ) 13 | from botorch.sampling.pathwise.paths import ( 14 | GeneralizedLinearPath, 15 | PathDict, 16 | PathList, 17 | SamplePath, 18 | ) 19 | from botorch.sampling.pathwise.posterior_samplers import ( 20 | draw_matheron_paths, 21 | get_matheron_path_model, 22 | MatheronPath, 23 | ) 24 | from botorch.sampling.pathwise.prior_samplers import draw_kernel_feature_paths 25 | from botorch.sampling.pathwise.update_strategies import gaussian_update 26 | 27 | 28 | __all__ = [ 29 | "draw_matheron_paths", 30 | "draw_kernel_feature_paths", 31 | "gen_kernel_features", 32 | "get_matheron_path_model", 33 | "gaussian_update", 34 | "GeneralizedLinearPath", 35 | "KernelEvaluationMap", 36 | "KernelFeatureMap", 37 | "MatheronPath", 38 | "SamplePath", 39 | "PathDict", 40 | "PathList", 41 | ] 42 | -------------------------------------------------------------------------------- /botorch/sampling/pathwise/features/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
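# An illustrative sketch of the ListSampler above (not part of this module): one
# sampler is paired with each posterior in a PosteriorList, and all samplers must
# share the same sample shape or `_validate_samplers` raises an UnsupportedError.
import torch

from botorch.sampling import IIDNormalSampler, ListSampler, SobolQMCNormalSampler

sampler = ListSampler(
    SobolQMCNormalSampler(sample_shape=torch.Size([64])),
    IIDNormalSampler(sample_shape=torch.Size([64])),
)
print(sampler.sample_shape)  # torch.Size([64])
# Calling `sampler(posterior=posterior_list)` draws from each posterior in the list
# and concatenates the results via `posterior_list._reshape_and_cat`.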
6 | 7 | 8 | from botorch.sampling.pathwise.features.generators import gen_kernel_features 9 | from botorch.sampling.pathwise.features.maps import ( 10 | FeatureMap, 11 | KernelEvaluationMap, 12 | KernelFeatureMap, 13 | ) 14 | 15 | __all__ = [ 16 | "FeatureMap", 17 | "gen_kernel_features", 18 | "KernelEvaluationMap", 19 | "KernelFeatureMap", 20 | ] 21 | -------------------------------------------------------------------------------- /botorch/sampling/stochastic_samplers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | Samplers to enable use cases that are not base sample driven, such as 9 | stochastic optimization of acquisition functions. 10 | """ 11 | 12 | from __future__ import annotations 13 | 14 | import torch 15 | from botorch.posteriors import Posterior 16 | from botorch.sampling.base import MCSampler 17 | from torch import Tensor 18 | 19 | 20 | class ForkedRNGSampler(MCSampler): 21 | r"""A sampler using `torch.fork_rng` to enable replicable sampling 22 | from a posterior that does not support base samples. 23 | 24 | NOTE: This approach is not a one-to-one replacement for base sample 25 | driven sampling. The main missing piece in this approach is that its 26 | outputs are not replicable across the batch dimensions. As a result, 27 | when an acquisition function is batch evaluated with repeated candidates, 28 | each candidate will produce a different acquisition value, which is not 29 | compatible with Sample Average Approximation. 30 | """ 31 | 32 | def forward(self, posterior: Posterior) -> Tensor: 33 | r"""Draws MC samples from the posterior in a `fork_rng` context. 34 | 35 | Args: 36 | posterior: The posterior to sample from. 37 | 38 | Returns: 39 | The samples drawn from the posterior. 40 | """ 41 | with torch.random.fork_rng(): 42 | torch.manual_seed(self.seed) 43 | return posterior.rsample(sample_shape=self.sample_shape) 44 | 45 | 46 | class StochasticSampler(MCSampler): 47 | r"""A sampler that simply calls `posterior.rsample` to generate the 48 | samples. This should only be used for stochastic optimization of the 49 | acquisition functions, e.g., via `gen_candidates_torch`. This should 50 | not be used with `optimize_acqf`, which uses deterministic optimizers 51 | under the hood. 52 | 53 | NOTE: This ignores the `seed` option. 54 | """ 55 | 56 | def forward(self, posterior: Posterior) -> Tensor: 57 | r"""Draws MC samples from the posterior. 58 | 59 | Args: 60 | posterior: The posterior to sample from. 61 | 62 | Returns: 63 | The samples drawn from the posterior. 64 | """ 65 | return posterior.rsample(sample_shape=self.sample_shape) 66 | -------------------------------------------------------------------------------- /botorch/settings.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | BoTorch settings. 
9 | """ 10 | 11 | from __future__ import annotations 12 | 13 | from botorch.logging import LOG_LEVEL_DEFAULT, logger 14 | 15 | 16 | class _Flag: 17 | r"""Base class for context managers for a binary setting.""" 18 | 19 | _state: bool = False 20 | 21 | @classmethod 22 | def on(cls) -> bool: 23 | return cls._state 24 | 25 | @classmethod 26 | def off(cls) -> bool: 27 | return not cls._state 28 | 29 | @classmethod 30 | def _set_state(cls, state: bool) -> None: 31 | cls._state = state 32 | 33 | def __init__(self, state: bool = True) -> None: 34 | self.prev = self.__class__.on() 35 | self.state = state 36 | 37 | def __enter__(self) -> None: 38 | self.__class__._set_state(self.state) 39 | 40 | def __exit__(self, *args) -> None: 41 | self.__class__._set_state(self.prev) 42 | 43 | 44 | class propagate_grads(_Flag): 45 | r"""Flag for propagating gradients to model training inputs / training data. 46 | 47 | When set to `True`, gradients will be propagated to the training inputs. 48 | This is useful in particular for propating gradients through fantasy models. 49 | """ 50 | 51 | _state: bool = False 52 | 53 | 54 | class validate_input_scaling(_Flag): 55 | r"""Flag for validating input normalization/standardization. 56 | 57 | When set to `True`, standard botorch models will validate (up to reasonable 58 | tolerance) that 59 | (i) none of the inputs contain NaN values 60 | (ii) the training data (`train_X`) is normalized to the unit cube 61 | (iii) the training targets (`train_Y`) are standardized (zero mean, unit var) 62 | No checks (other than the NaN check) are performed for observed variances 63 | (`train_Y_var`) at this point. 64 | """ 65 | 66 | _state: bool = True 67 | 68 | 69 | class log_level: 70 | r"""Flag for printing verbose logging statements. 71 | 72 | Applies the given level to logging.getLogger('botorch') calls. For 73 | instance, when set to logging.INFO, all logger calls of level INFO or 74 | above will be printed to STDERR 75 | """ 76 | 77 | level: int = LOG_LEVEL_DEFAULT 78 | 79 | @classmethod 80 | def _set_level(cls, level: int) -> None: 81 | cls.level = level 82 | logger.setLevel(level) 83 | 84 | def __init__(self, level: int = LOG_LEVEL_DEFAULT) -> None: 85 | r""" 86 | Args: 87 | level: The log level. Defaults to LOG_LEVEL_DEFAULT. 88 | """ 89 | self.prev = self.__class__.level 90 | self.level = level 91 | 92 | def __enter__(self) -> None: 93 | self.__class__._set_level(self.level) 94 | 95 | def __exit__(self, *args) -> None: 96 | self.__class__._set_level(self.prev) 97 | -------------------------------------------------------------------------------- /botorch/test_functions/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | from botorch.test_functions.multi_fidelity import ( 8 | AugmentedBranin, 9 | AugmentedHartmann, 10 | AugmentedRosenbrock, 11 | ) 12 | from botorch.test_functions.multi_objective import ( 13 | BNH, 14 | BraninCurrin, 15 | C2DTLZ2, 16 | CarSideImpact, 17 | CONSTR, 18 | ConstrainedBraninCurrin, 19 | DiscBrake, 20 | DTLZ1, 21 | DTLZ2, 22 | DTLZ3, 23 | DTLZ4, 24 | DTLZ5, 25 | DTLZ7, 26 | GMM, 27 | MW7, 28 | OSY, 29 | Penicillin, 30 | SRN, 31 | ToyRobust, 32 | VehicleSafety, 33 | WeldedBeam, 34 | ZDT1, 35 | ZDT2, 36 | ZDT3, 37 | ) 38 | from botorch.test_functions.multi_objective_multi_fidelity import ( 39 | MOMFBraninCurrin, 40 | MOMFPark, 41 | ) 42 | from botorch.test_functions.synthetic import ( 43 | Ackley, 44 | Beale, 45 | Branin, 46 | Bukin, 47 | Cosine8, 48 | DixonPrice, 49 | DropWave, 50 | EggHolder, 51 | Griewank, 52 | Hartmann, 53 | HolderTable, 54 | Levy, 55 | Michalewicz, 56 | Powell, 57 | PressureVessel, 58 | Rastrigin, 59 | Rosenbrock, 60 | Shekel, 61 | SixHumpCamel, 62 | SpeedReducer, 63 | StyblinskiTang, 64 | SyntheticTestFunction, 65 | TensionCompressionString, 66 | ThreeHumpCamel, 67 | WeldedBeamSO, 68 | ) 69 | 70 | 71 | __all__ = [ 72 | "Ackley", 73 | "AugmentedBranin", 74 | "AugmentedHartmann", 75 | "AugmentedRosenbrock", 76 | "Beale", 77 | "BNH", 78 | "Branin", 79 | "BraninCurrin", 80 | "Bukin", 81 | "CONSTR", 82 | "Cosine8", 83 | "CarSideImpact", 84 | "ConstrainedBraninCurrin", 85 | "C2DTLZ2", 86 | "DiscBrake", 87 | "DixonPrice", 88 | "DropWave", 89 | "DTLZ1", 90 | "DTLZ2", 91 | "DTLZ3", 92 | "DTLZ4", 93 | "DTLZ5", 94 | "DTLZ7", 95 | "EggHolder", 96 | "GMM", 97 | "Griewank", 98 | "Hartmann", 99 | "HolderTable", 100 | "Levy", 101 | "Michalewicz", 102 | "MW7", 103 | "OSY", 104 | "Penicillin", 105 | "Powell", 106 | "PressureVessel", 107 | "Rastrigin", 108 | "Rosenbrock", 109 | "Shekel", 110 | "SixHumpCamel", 111 | "SpeedReducer", 112 | "SRN", 113 | "StyblinskiTang", 114 | "SyntheticTestFunction", 115 | "TensionCompressionString", 116 | "ThreeHumpCamel", 117 | "ToyRobust", 118 | "VehicleSafety", 119 | "WeldedBeam", 120 | "WeldedBeamSO", 121 | "ZDT1", 122 | "ZDT2", 123 | "ZDT3", 124 | "MOMFBraninCurrin", 125 | "MOMFPark", 126 | ] 127 | -------------------------------------------------------------------------------- /botorch/test_functions/utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | 8 | from __future__ import annotations 9 | 10 | import torch 11 | 12 | from torch import Tensor 13 | 14 | 15 | def round_nearest( 16 | X: Tensor, increment: float, bounds: tuple[float, float] | None 17 | ) -> Tensor: 18 | r"""Rounds the input tensor to the nearest multiple of `increment`. 19 | 20 | Args: 21 | X: The input to be rounded. 22 | increment: The increment to round to. 23 | bounds: An optional tuple of two floats representing the lower and upper 24 | bounds on `X`. If provided, this will round to the nearest multiple 25 | of `increment` that lies within the bounds. 26 | 27 | Returns: 28 | The rounded input. 
29 | """ 30 | X_round = torch.round(X / increment) * increment 31 | if bounds is not None: 32 | X_round = torch.where(X_round < bounds[0], X_round + increment, X_round) 33 | X_round = torch.where(X_round > bounds[1], X_round - increment, X_round) 34 | return X_round 35 | -------------------------------------------------------------------------------- /botorch/test_utils/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | """ 8 | test_utils has its own directory with 'botorch/' to avoid circular dependencies: 9 | Anything in 'tests/' can depend on anything in 'botorch/test_utils/', and 10 | anything in 'botorch/test_utils/' can depend on anything in the rest of 11 | 'botorch/'. 12 | """ 13 | 14 | from botorch.test_utils.mock import mock_optimize 15 | 16 | __all__ = ["mock_optimize"] 17 | -------------------------------------------------------------------------------- /botorch/utils/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.utils.constraints import get_outcome_constraint_transforms 8 | from botorch.utils.feasible_volume import estimate_feasible_volume 9 | from botorch.utils.objective import apply_constraints, get_objective_weights_transform 10 | from botorch.utils.rounding import approximate_round 11 | from botorch.utils.sampling import ( 12 | batched_multinomial, 13 | draw_sobol_normal_samples, 14 | draw_sobol_samples, 15 | manual_seed, 16 | ) 17 | from botorch.utils.transforms import standardize, t_batch_mode_transform 18 | 19 | 20 | __all__ = [ 21 | "apply_constraints", 22 | "approximate_round", 23 | "batched_multinomial", 24 | "draw_sobol_normal_samples", 25 | "draw_sobol_samples", 26 | "estimate_feasible_volume", 27 | "get_objective_weights_transform", 28 | "get_outcome_constraint_transforms", 29 | "manual_seed", 30 | "standardize", 31 | "t_batch_mode_transform", 32 | ] 33 | -------------------------------------------------------------------------------- /botorch/utils/constants.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from __future__ import annotations 8 | 9 | from collections.abc import Iterator 10 | 11 | from functools import lru_cache 12 | from numbers import Number 13 | 14 | import torch 15 | from torch import Tensor 16 | 17 | 18 | @lru_cache(maxsize=None) 19 | def get_constants( 20 | values: Number | Iterator[Number], 21 | device: torch.device | None = None, 22 | dtype: torch.dtype | None = None, 23 | ) -> Tensor | tuple[Tensor, ...]: 24 | r"""Returns scalar-valued Tensors containing each of the given constants. 25 | Used to expedite tensor operations involving scalar arithmetic. 
Note that 26 | the returned Tensors should not be modified in-place.""" 27 | if isinstance(values, Number): 28 | return torch.full((), values, dtype=dtype, device=device) 29 | 30 | return tuple(torch.full((), val, dtype=dtype, device=device) for val in values) 31 | 32 | 33 | def get_constants_like( 34 | values: Number | Iterator[Number], 35 | ref: Tensor, 36 | ) -> Tensor | Iterator[Tensor]: 37 | return get_constants(values, device=ref.device, dtype=ref.dtype) 38 | -------------------------------------------------------------------------------- /botorch/utils/evaluation.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from math import log 8 | 9 | import torch 10 | from botorch.utils.transforms import is_fully_bayesian 11 | from gpytorch.models.exact_gp import ExactGP 12 | 13 | MLL = "MLL" 14 | AIC = "AIC" 15 | BIC = "BIC" 16 | 17 | 18 | def compute_in_sample_model_fit_metric(model: ExactGP, criterion: str) -> float: 19 | """Compute a in-sample model fit metric. 20 | 21 | Args: 22 | model: A fitted ExactGP. 23 | criterion: Evaluation criterion. One of "MLL", "AIC", "BIC". AIC 24 | penalizes the MLL based on the number of parameters. BIC uses 25 | a slightly different penalty based on the number of parameters 26 | and data points. 27 | 28 | Returns: 29 | The in-sample evaluation metric. 30 | """ 31 | if criterion not in (AIC, BIC, MLL): 32 | raise ValueError(f"Invalid evaluation criterion {criterion}.") 33 | if is_fully_bayesian(model=model): 34 | model.train(reset=False) 35 | else: 36 | model.train() 37 | with torch.no_grad(): 38 | output = model(*model.train_inputs) 39 | output = model.likelihood(output) 40 | mll = output.log_prob(model.train_targets) 41 | # compute average MLL over MCMC samples if the model is fully bayesian 42 | mll_scalar = mll.mean().item() 43 | model.eval() 44 | num_params = sum(p.numel() for p in model.parameters()) 45 | if is_fully_bayesian(model=model): 46 | num_params /= mll.shape[0] 47 | if criterion == AIC: 48 | return 2 * num_params - 2 * mll_scalar 49 | elif criterion == BIC: 50 | return num_params * log(model.train_inputs[0].shape[-2]) - 2 * mll_scalar 51 | return mll_scalar 52 | -------------------------------------------------------------------------------- /botorch/utils/multi_objective/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from botorch.utils.multi_objective.hypervolume import Hypervolume, infer_reference_point 8 | from botorch.utils.multi_objective.pareto import is_non_dominated 9 | from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization 10 | 11 | 12 | __all__ = [ 13 | "get_chebyshev_scalarization", 14 | "infer_reference_point", 15 | "is_non_dominated", 16 | "Hypervolume", 17 | ] 18 | -------------------------------------------------------------------------------- /botorch/utils/multi_objective/box_decompositions/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. 
and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | 8 | from botorch.utils.multi_objective.box_decompositions.box_decomposition_list import ( # noqa E501 9 | BoxDecompositionList, 10 | ) 11 | from botorch.utils.multi_objective.box_decompositions.dominated import ( 12 | DominatedPartitioning, 13 | ) 14 | from botorch.utils.multi_objective.box_decompositions.non_dominated import ( 15 | FastNondominatedPartitioning, 16 | NondominatedPartitioning, 17 | ) 18 | from botorch.utils.multi_objective.box_decompositions.utils import ( 19 | compute_dominated_hypercell_bounds_2d, 20 | compute_non_dominated_hypercell_bounds_2d, 21 | ) 22 | 23 | 24 | __all__ = [ 25 | "compute_dominated_hypercell_bounds_2d", 26 | "compute_non_dominated_hypercell_bounds_2d", 27 | "BoxDecompositionList", 28 | "DominatedPartitioning", 29 | "FastNondominatedPartitioning", 30 | "NondominatedPartitioning", 31 | ] 32 | -------------------------------------------------------------------------------- /botorch/utils/multi_objective/box_decompositions/dominated.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r"""Algorithms for partitioning the dominated space into hyperrectangles.""" 8 | 9 | from __future__ import annotations 10 | 11 | from botorch.utils.multi_objective.box_decompositions.box_decomposition import ( 12 | FastPartitioning, 13 | ) 14 | from botorch.utils.multi_objective.box_decompositions.utils import ( 15 | compute_dominated_hypercell_bounds_2d, 16 | get_partition_bounds, 17 | ) 18 | from torch import Tensor 19 | 20 | 21 | class DominatedPartitioning(FastPartitioning): 22 | r"""Partition dominated space into axis-aligned hyperrectangles. 23 | 24 | This uses the Algorithm 1 from [Lacour17]_. 25 | 26 | Example: 27 | >>> bd = DominatedPartitioning(ref_point, Y) 28 | """ 29 | 30 | def _partition_space_2d(self) -> None: 31 | r"""Partition the non-dominated space into disjoint hypercells. 32 | 33 | This direct method works for `m=2` outcomes. 34 | """ 35 | cell_bounds = compute_dominated_hypercell_bounds_2d( 36 | # flip self.pareto_Y because it is sorted in decreasing order (since 37 | # self._pareto_Y was sorted in increasing order and we multiplied by -1) 38 | pareto_Y_sorted=self.pareto_Y.flip(-2), 39 | ref_point=self.ref_point, 40 | ) 41 | self.hypercell_bounds = cell_bounds 42 | 43 | def _get_partitioning(self) -> None: 44 | r"""Get the bounds of each hypercell in the decomposition.""" 45 | minimization_cell_bounds = get_partition_bounds( 46 | Z=self._Z, U=self._U, ref_point=self._neg_ref_point.view(-1) 47 | ) 48 | cell_bounds = -minimization_cell_bounds.flip(0) 49 | self.hypercell_bounds = cell_bounds 50 | 51 | def _compute_hypervolume_if_y_has_data(self) -> Tensor: 52 | r"""Compute hypervolume that is dominated by the Pareto Frontier. 53 | 54 | Returns: 55 | A `(batch_shape)`-dim tensor containing the hypervolume dominated by 56 | each Pareto frontier. 
57 | """ 58 | return ( 59 | (self.hypercell_bounds[1] - self.hypercell_bounds[0]) 60 | .prod(dim=-1) 61 | .sum(dim=-1) 62 | ) 63 | 64 | def _get_single_cell(self) -> None: 65 | r"""Set the partitioning to be a single cell in the case of no Pareto points.""" 66 | # Set lower and upper bounds to be the reference point to define an empty cell 67 | cell_bounds = self.ref_point.expand( 68 | 2, *self._neg_pareto_Y.shape[:-2], 1, self.num_outcomes 69 | ).clone() 70 | self.hypercell_bounds = cell_bounds 71 | -------------------------------------------------------------------------------- /botorch/utils/multitask.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | Helpers for multitask modeling. 9 | """ 10 | 11 | from __future__ import annotations 12 | 13 | import torch 14 | from gpytorch.distributions import MultitaskMultivariateNormal 15 | from gpytorch.distributions.multivariate_normal import MultivariateNormal 16 | from linear_operator import to_linear_operator 17 | 18 | 19 | def separate_mtmvn(mvn: MultitaskMultivariateNormal) -> list[MultivariateNormal]: 20 | """ 21 | Separate a MTMVN into a list of MVNs, where covariance across data within each task 22 | are preserved, while covariance across task are dropped. 23 | """ 24 | # T150340766 Upstream as a class method on gpytorch MultitaskMultivariateNormal. 25 | full_covar = mvn.lazy_covariance_matrix 26 | num_data, num_tasks = mvn.mean.shape[-2:] 27 | if mvn._interleaved: 28 | data_indices = torch.arange( 29 | 0, num_data * num_tasks, num_tasks, device=full_covar.device 30 | ).view(-1, 1, 1) 31 | task_indices = torch.arange(num_tasks, device=full_covar.device) 32 | else: 33 | data_indices = torch.arange(num_data, device=full_covar.device).view(-1, 1, 1) 34 | task_indices = torch.arange( 35 | 0, num_data * num_tasks, num_data, device=full_covar.device 36 | ) 37 | slice_ = (data_indices + task_indices).transpose(-1, -3) 38 | data_covars = full_covar[..., slice_, slice_.transpose(-1, -2)] 39 | mvns = [] 40 | for c in range(num_tasks): 41 | mvns.append( 42 | MultivariateNormal( 43 | mvn.mean[..., c], to_linear_operator(data_covars[..., c, :, :]) 44 | ) 45 | ) 46 | return mvns 47 | -------------------------------------------------------------------------------- /botorch/utils/probability/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 
6 | 7 | from botorch.utils.probability.bvn import bvn, bvnmom 8 | from botorch.utils.probability.lin_ess import LinearEllipticalSliceSampler 9 | from botorch.utils.probability.mvnxpb import MVNXPB 10 | from botorch.utils.probability.truncated_multivariate_normal import ( 11 | TruncatedMultivariateNormal, 12 | ) 13 | from botorch.utils.probability.unified_skew_normal import UnifiedSkewNormal 14 | from botorch.utils.probability.utils import ndtr 15 | 16 | 17 | __all__ = [ 18 | "bvn", 19 | "bvnmom", 20 | "LinearEllipticalSliceSampler", 21 | "MVNXPB", 22 | "ndtr", 23 | "TruncatedMultivariateNormal", 24 | "UnifiedSkewNormal", 25 | ] 26 | -------------------------------------------------------------------------------- /botorch/utils/types.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from __future__ import annotations 8 | 9 | 10 | class _DefaultType(type): 11 | r""" 12 | Private class whose sole instance `DEFAULT` serves as a special indicator 13 | representing that a default value should be assigned to an argument. 14 | Typically used in cases where `None` is an allowed argument. 15 | """ 16 | 17 | 18 | DEFAULT = _DefaultType("DEFAULT", (), {}) 19 | -------------------------------------------------------------------------------- /botorch_community/README.md: -------------------------------------------------------------------------------- 1 | This directory contains methods contributed by the BoTorch community. 2 | 3 | Each file should list the GitHub handle of its contributors, who may be asked 4 | for help in maintaining & updating the code against future versions of BoTorch 5 | and its dependencies. 6 | -------------------------------------------------------------------------------- /botorch_community/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # 3 | # This source code is licensed under the MIT license found in the 4 | # LICENSE file in the root directory of this source tree. 5 | -------------------------------------------------------------------------------- /botorch_community/acquisition/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # 3 | # This source code is licensed under the MIT license found in the 4 | # LICENSE file in the root directory of this source tree. 5 | 6 | from botorch_community.acquisition.bayesian_active_learning import ( 7 | qBayesianQueryByComittee, 8 | qBayesianVarianceReduction, 9 | qStatisticalDistanceActiveLearning, 10 | ) 11 | 12 | # NOTE: This import is needed to register the input constructors.
13 | from botorch_community.acquisition.input_constructors import ( # noqa F401 14 | acqf_input_constructor, 15 | ) 16 | from botorch_community.acquisition.rei import ( 17 | LogRegionalExpectedImprovement, 18 | qLogRegionalExpectedImprovement, 19 | ) 20 | from botorch_community.acquisition.scorebo import qSelfCorrectingBayesianOptimization 21 | 22 | __all__ = [ 23 | "LogRegionalExpectedImprovement", 24 | "qBayesianQueryByComittee", 25 | "qBayesianVarianceReduction", 26 | "qLogRegionalExpectedImprovement", 27 | "qSelfCorrectingBayesianOptimization", 28 | "qStatisticalDistanceActiveLearning", 29 | ] 30 | -------------------------------------------------------------------------------- /botorch_community/acquisition/input_constructors.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | r""" 8 | A registry of helpers for generating inputs to acquisition function 9 | constructors programmatically from a consistent input format. 10 | 11 | Contributor: hvarfner (bayesian_active_learning, scorebo) 12 | """ 13 | 14 | from __future__ import annotations 15 | 16 | from typing import List, Optional, Tuple 17 | 18 | import torch 19 | from botorch.acquisition.input_constructors import acqf_input_constructor 20 | from botorch.acquisition.objective import ScalarizedPosteriorTransform 21 | from botorch.acquisition.utils import get_optimal_samples 22 | from botorch.models.model import Model 23 | from botorch_community.acquisition.bayesian_active_learning import ( 24 | qBayesianQueryByComittee, 25 | qBayesianVarianceReduction, 26 | qStatisticalDistanceActiveLearning, 27 | ) 28 | from botorch_community.acquisition.scorebo import qSelfCorrectingBayesianOptimization 29 | from torch import Tensor 30 | 31 | 32 | @acqf_input_constructor( 33 | qBayesianQueryByComittee, 34 | qBayesianVarianceReduction, 35 | ) 36 | def construct_inputs_BAL( 37 | model: Model, 38 | X_pending: Optional[Tensor] = None, 39 | ): 40 | inputs = { 41 | "model": model, 42 | "X_pending": X_pending, 43 | } 44 | return inputs 45 | 46 | 47 | @acqf_input_constructor(qStatisticalDistanceActiveLearning) 48 | def construct_inputs_SAL( 49 | model: Model, 50 | distance_metric: str = "hellinger", 51 | X_pending: Optional[Tensor] = None, 52 | ): 53 | inputs = { 54 | "model": model, 55 | "distance_metric": distance_metric, 56 | "X_pending": X_pending, 57 | } 58 | return inputs 59 | 60 | 61 | @acqf_input_constructor(qSelfCorrectingBayesianOptimization) 62 | def construct_inputs_SCoreBO( 63 | model: Model, 64 | bounds: List[Tuple[float, float]], 65 | num_optima: int = 8, 66 | posterior_transform: Optional[ScalarizedPosteriorTransform] = None, 67 | distance_metric: str = "hellinger", 68 | X_pending: Optional[Tensor] = None, 69 | ): 70 | dtype = model.train_targets.dtype 71 | # the number of optima are per model 72 | optimal_inputs, optimal_outputs = get_optimal_samples( 73 | model=model, 74 | bounds=torch.as_tensor(bounds, dtype=dtype).T, 75 | num_optima=num_optima, 76 | posterior_transform=posterior_transform, 77 | return_transformed=True, 78 | ) 79 | inputs = { 80 | "model": model, 81 | "optimal_inputs": optimal_inputs, 82 | "optimal_outputs": optimal_outputs, 83 | "distance_metric": distance_metric, 84 | "posterior_transform": posterior_transform, 85 | "X_pending": X_pending, 86 | } 87 | return 
inputs 88 | -------------------------------------------------------------------------------- /botorch_community/models/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # 3 | # This source code is licensed under the MIT license found in the 4 | # LICENSE file in the root directory of this source tree. 5 | -------------------------------------------------------------------------------- /botorch_community/models/blls.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from __future__ import annotations 8 | 9 | from abc import ABC, abstractmethod 10 | 11 | import torch 12 | import torch.nn as nn 13 | from botorch.models.model import Model 14 | from botorch_community.models.vbll_helper import DenseNormal, Normal 15 | from torch import Tensor 16 | 17 | 18 | class AbstractBLLModel(Model, ABC): 19 | def __init__(self): 20 | """Abstract class for Bayesian Last Layer (BLL) models.""" 21 | super().__init__() 22 | self.model = None 23 | 24 | @property 25 | def num_outputs(self) -> int: 26 | return self.model.num_outputs 27 | 28 | @property 29 | def num_inputs(self): 30 | return self.model.num_inputs 31 | 32 | @property 33 | def device(self): 34 | return self.model.device 35 | 36 | @abstractmethod 37 | def __call__(self, X: Tensor) -> Normal | DenseNormal: 38 | pass # pragma: no cover 39 | 40 | @abstractmethod 41 | def fit(self, *args, **kwargs): 42 | pass # pragma: no cover 43 | 44 | @abstractmethod 45 | def sample(self, sample_shape: torch.Size | None = None) -> nn.Module: 46 | pass # pragma: no cover 47 | -------------------------------------------------------------------------------- /botorch_community/models/example.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | """ 8 | This file contains a simple modification of SingleTaskGP as an example. 9 | 10 | References: 11 | 12 | .. [Example2024paper] 13 | S. Cakmak, D.Eriksson, M. Balandat, E. Bakshy. Example Paper. 14 | Proceedings of the Example Conference, 2024. 15 | 16 | Contributor: saitcakmak 17 | """ 18 | 19 | from typing import Optional 20 | 21 | from botorch.models.gp_regression import SingleTaskGP 22 | from gpytorch.kernels import RBFKernel, ScaleKernel 23 | from torch import Tensor 24 | 25 | 26 | class ExampleModel(SingleTaskGP): 27 | def __init__( 28 | self, train_X: Tensor, train_Y: Tensor, train_Yvar: Optional[Tensor] = None 29 | ) -> None: 30 | r"""Initialize the example model from [Example2024paper]_. 31 | 32 | Args: 33 | train_X: A `batch_shape x n x d` tensor of training features. 34 | train_Y: A `batch_shape x n x m` tensor of training observations. 35 | train_Yvar: An optional `batch_shape x n x m` tensor of observed 36 | measurement noise. 
37 | """ 38 | super().__init__( 39 | train_X=train_X, 40 | train_Y=train_Y, 41 | train_Yvar=train_Yvar, 42 | covar_module=ScaleKernel(RBFKernel(ard_num_dims=train_X.shape[-1])), 43 | ) 44 | -------------------------------------------------------------------------------- /botorch_community/models/utils/prior_fitted_network.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | import gzip 8 | import io 9 | import os 10 | from enum import Enum 11 | from typing import Optional 12 | 13 | try: 14 | import requests 15 | except ImportError: # pragma: no cover 16 | raise ImportError( 17 | "The `requests` library is required to run `download_model`. " 18 | "You can install it using pip: `pip install requests`" 19 | ) 20 | 21 | 22 | import torch 23 | import torch.nn as nn 24 | 25 | 26 | class ModelPaths(Enum): 27 | """Enum for PFN models""" 28 | 29 | pfns4bo_hebo = ( 30 | "https://github.com/automl/PFNs4BO/raw/refs/heads/main/pfns4bo" 31 | "/final_models/model_hebo_morebudget_9_unused_features_3.pt.gz" 32 | ) 33 | pfns4bo_bnn = ( 34 | "https://github.com/automl/PFNs4BO/raw/refs/heads/main/pfns4bo" 35 | "/final_models/model_sampled_warp_simple_mlp_for_hpob_46.pt.gz" 36 | ) 37 | 38 | 39 | def download_model( 40 | model_path: str | ModelPaths, 41 | proxies: Optional[dict[str, str]] = None, 42 | cache_dir: Optional[str] = None, 43 | ) -> nn.Module: 44 | """Download and load PFN model weights from a URL. 45 | 46 | Args: 47 | model_path: A string representing the URL of the model to load or a ModelPaths 48 | enum. 49 | proxies: An optional dictionary mapping from network protocols, e.g. ``http``, 50 | to proxy addresses. 51 | cache_dir: The cache dir to use, if not specified we will use 52 | ``/tmp/botorch_pfn_models`` 53 | 54 | Returns: 55 | A PFN model. 56 | """ 57 | if isinstance(model_path, ModelPaths): 58 | model_path = model_path.value 59 | 60 | cache_dir = cache_dir if cache_dir is not None else "/tmp/botorch_pfn_models" 61 | os.makedirs(cache_dir, exist_ok=True) 62 | cache_path = os.path.join(cache_dir, model_path.split("/")[-1]) 63 | 64 | if not os.path.exists(cache_path): 65 | # Download the model weights 66 | response = requests.get(model_path, proxies=proxies or None) 67 | response.raise_for_status() 68 | 69 | # Decompress the gzipped model weights 70 | with gzip.GzipFile(fileobj=io.BytesIO(response.content)) as gz: 71 | model = torch.load(gz, map_location=torch.device("cpu")) 72 | 73 | # Save the model to cache 74 | torch.save(model, cache_path) 75 | print("saved at: ", cache_path) 76 | else: 77 | # Load the model from cache 78 | model = torch.load(cache_path, map_location=torch.device("cpu")) 79 | print("loaded from cache: ", cache_path) 80 | 81 | return model 82 | -------------------------------------------------------------------------------- /botorch_community/posteriors/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Meta Platforms, Inc. and affiliates. 2 | # 3 | # This source code is licensed under the MIT license found in the 4 | # LICENSE file in the root directory of this source tree. 
5 | -------------------------------------------------------------------------------- /botorch_community/posteriors/bll_posterior.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # Copyright (c) Meta Platforms, Inc. and affiliates. 3 | # 4 | # This source code is licensed under the MIT license found in the 5 | # LICENSE file in the root directory of this source tree. 6 | 7 | from __future__ import annotations 8 | 9 | import math 10 | 11 | import torch 12 | from botorch.posteriors import GPyTorchPosterior, Posterior 13 | 14 | from botorch_community.models.blls import AbstractBLLModel 15 | 16 | from torch import Tensor 17 | 18 | 19 | class BLLPosterior(Posterior): 20 | def __init__( 21 | self, 22 | posterior: GPyTorchPosterior, 23 | model: AbstractBLLModel, 24 | X: Tensor, 25 | output_dim: int, 26 | ): 27 | """A posterior for Bayesian last layer models. 28 | 29 | Args: 30 | posterior: A posterior object. 31 | model: A BLL model. 32 | X: Input data on which the posterior was computed. 33 | output_dim: Output dimension of the model. 34 | """ 35 | super().__init__() 36 | self.posterior = posterior 37 | self.model = model 38 | self.output_dim = output_dim 39 | self.X = X 40 | self._is_mt = output_dim > 1 41 | 42 | def rsample( 43 | self, 44 | sample_shape: torch.Size | None = None, 45 | ) -> Tensor: 46 | """ 47 | For VBLLs, we need to sample from W and then create the 48 | generalized linear model to get posterior samples. 49 | 50 | Args: 51 | sample_shape: The shape of the samples to be drawn. If None, a single 52 | sample is drawn. Otherwise, the shape should be a tuple of integers 53 | representing the desired dimensions. 54 | 55 | Returns: 56 | A `(sample_shape) x N x output_dim`-dim Tensor of posterior samples. 57 | """ 58 | n_samples = 1 if sample_shape is None else math.prod(sample_shape) 59 | samples_list = [self.model.sample()(self.X) for _ in range(n_samples)] 60 | samples = torch.stack(samples_list, dim=0) 61 | 62 | # reshape to [sample_shape, n, output_dim] 63 | sample_shape = torch.Size([1]) if sample_shape is None else sample_shape 64 | new_shape = sample_shape + samples.shape[-2:] 65 | return samples.reshape(new_shape) 66 | 67 | @property 68 | def mean(self) -> Tensor: 69 | """The posterior mean.""" 70 | # Directly return the mean from the underlying posterior 71 | return self.posterior.mean 72 | 73 | @property 74 | def variance(self) -> Tensor: 75 | """The posterior variance.""" 76 | # Directly return the variance from the underlying posterior 77 | return self.posterior.variance 78 | 79 | @property 80 | def device(self) -> torch.device: 81 | return self.posterior.device 82 | 83 | @property 84 | def dtype(self) -> torch.dtype: 85 | """The torch dtype of the distribution.""" 86 | return self.posterior.dtype 87 | -------------------------------------------------------------------------------- /botorch_logo_lockup.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytorch/botorch/d247a33486c22bc5c6f16107db3e71f1e3bf3acd/botorch_logo_lockup.png -------------------------------------------------------------------------------- /docs/assets/EI_MC_qMC.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytorch/botorch/d247a33486c22bc5c6f16107db3e71f1e3bf3acd/docs/assets/EI_MC_qMC.png -------------------------------------------------------------------------------- /docs/assets/EI_optimal_val_hist.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytorch/botorch/d247a33486c22bc5c6f16107db3e71f1e3bf3acd/docs/assets/EI_optimal_val_hist.png -------------------------------------------------------------------------------- /docs/assets/EI_optimizer_hist.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytorch/botorch/d247a33486c22bc5c6f16107db3e71f1e3bf3acd/docs/assets/EI_optimizer_hist.png -------------------------------------------------------------------------------- /docs/assets/EI_resampling_fixed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pytorch/botorch/d247a33486c22bc5c6f16107db3e71f1e3bf3acd/docs/assets/EI_resampling_fixed.png -------------------------------------------------------------------------------- /docs/constraints.md: -------------------------------------------------------------------------------- 1 | --- 2 | id: constraints 3 | title: Constraints 4 | --- 5 | 6 | BoTorch supports two distinct types of constraints: parameter constraints 7 | and outcome constraints. 8 | 9 | 10 | ### Parameter Constraints 11 | 12 | Parameter constraints are constraints on the input space that restrict the 13 | values of the generated candidates. That is, rather than just living inside 14 | a bounding box defined by the `bounds` argument to `optimize_acqf` (or its 15 | derivatives), candidate points may be further constrained by linear (in)equality 16 | constraints, specified by the `inequality_constraints` and `equality_constraints` 17 | arguments to `optimize_acqf`. 18 |
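For illustration, the following minimal sketch (random toy data, an unfitted model, and illustrative optimizer settings) restricts candidates in the unit square to satisfy `x_0 + x_1 >= 1`:

```python
import torch
from botorch.acquisition import qLogExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

# Toy data and an (unfitted) model on the 2-d unit square -- illustrative only.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X=train_X, train_Y=train_Y)
acq_func = qLogExpectedImprovement(model=model, best_f=train_Y.max())

# Each constraint is a tuple `(indices, coefficients, rhs)` encoding
# `sum_i coefficients[i] * X[..., indices[i]] >= rhs`; here: x_0 + x_1 >= 1.
candidate, acq_value = optimize_acqf(
    acq_function=acq_func,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=5,
    raw_samples=128,
    inequality_constraints=[
        (torch.tensor([0, 1]), torch.tensor([1.0, 1.0], dtype=torch.double), 1.0)
    ],
)
```

Equality constraints use the same `(indices, coefficients, rhs)` format via the `equality_constraints` argument.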
19 | Parameter constraints are used, e.g., when certain configurations are infeasible 20 | to implement, or would result in excessive costs. These constraints do not affect 21 | the model directly, only indirectly in the sense that all newly generated and 22 | later observed points will satisfy these constraints. In particular, you may 23 | have a model that is fit on points that do not satisfy a certain set of parameter 24 | constraints, but still generate candidates subject to those constraints. 25 | 26 | 27 | ### Outcome Constraints 28 | 29 | In the context of Bayesian optimization, outcome constraints usually mean 30 | constraints on a (black-box) outcome that needs to be modeled, just like 31 | the objective function is modeled by a surrogate model. Various approaches 32 | for handling these types of constraints have been proposed. A popular one that 33 | is also adopted by BoTorch for Monte Carlo acquisition functions is to multiply 34 | the acquisition utility by the feasibility indicator of the modeled outcome 35 | ([^Gardner2014], [^Letham2017]). This approach can be used by passing 36 | `constraints` to the constructors of compatible acquisition functions, 37 | e.g. any `SampleReducingMCAcquisitionFunction` with a positive acquisition utility, 38 | such as expected improvement. 39 | Notably, if the constraint and objective models are statistically independent, 40 | the constrained expected improvement variant is mathematically equivalent to the 41 | unconstrained expected improvement of the objective, multiplied by the probability of 42 | feasibility under the modeled outcome constraint. 43 |
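Concretely, a feasibility-weighted log expected improvement for a toy two-output model, where the first output is the objective and the second output is a constraint that should be non-positive, could be set up roughly as follows (a minimal sketch; the data are random and the model is left unfitted):

```python
import torch
from botorch.acquisition import qLogExpectedImprovement
from botorch.acquisition.objective import LinearMCObjective
from botorch.models import SingleTaskGP

# Toy two-output model: output 0 is the objective, output 1 is a constraint
# outcome that is treated as feasible when it is negative (illustrative only).
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.cat(
    [train_X.sum(dim=-1, keepdim=True), train_X[:, :1] - 0.5], dim=-1
)
model = SingleTaskGP(train_X=train_X, train_Y=train_Y)

acq_func = qLogExpectedImprovement(
    model=model,
    best_f=train_Y[:, 0].max(),  # ideally the best observed *feasible* value
    objective=LinearMCObjective(weights=torch.tensor([1.0, 0.0], dtype=torch.double)),
    # Constraint callables map posterior samples of shape `... x q x m` to
    # `... x q`; negative values indicate feasibility.
    constraints=[lambda Z: Z[..., 1]],
)
```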
44 | See the [Closed-Loop Optimization](tutorials/closed_loop_botorch_only) 45 | tutorial for an example of using outcome constraints in BoTorch. 46 | 47 | 48 | 49 | [^Gardner2014]: J. R. Gardner, M. J. Kusner, Z. E. Xu, K. Q. Weinberger and 50 | J. P. Cunningham. Bayesian Optimization with Inequality Constraints. ICML 2014. 51 | 52 | [^Letham2017]: B. Letham, B. Karrer, G. Ottoni and E. Bakshy. Constrained Bayesian Optimization with Noisy Experiments. Bayesian Analysis 14(2), 2019. 53 | -------------------------------------------------------------------------------- /docs/posteriors.md: -------------------------------------------------------------------------------- 1 | --- 2 | id: posteriors 3 | title: Posteriors 4 | --- 5 | 6 | A BoTorch `Posterior` object is a layer of 7 | abstraction that separates the specific model used from the evaluation (and 8 | subsequent optimization) of acquisition functions. 9 | In the simplest case, a posterior is a lightweight wrapper around an explicit 10 | distribution object from `torch.distributions` (or `gpytorch.distributions`). 11 | However, a BoTorch `Posterior` can be any distribution (even an implicit one), 12 | so long as one can sample from that distribution. For example, a posterior could 13 | be represented implicitly by some base distribution mapped through a neural network. 14 | 15 | While the analytic acquisition functions assume that the posterior is a 16 | multivariate Gaussian, the Monte Carlo (MC) based acquisition functions do not make any 17 | assumptions about the underlying distribution. Rather, the MC-based acquisition 18 | functions only require that the posterior can generate samples through an `rsample` 19 | method. As long as the posterior implements the [`Posterior`](https://botorch.readthedocs.io/en/latest/posteriors.html#botorch.posteriors.posterior.Posterior) 20 | interface, it can be used with an MC-based acquisition function. In addition, note that 21 | gradient-based acquisition function optimization requires the ability to back-propagate 22 | gradients through the MC samples. 23 |
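To make the interface concrete, a minimal sketch of a posterior wrapping an explicit `torch.distributions.Normal` is shown below. The class name is purely illustrative (it is not part of BoTorch), and depending on the acquisition function a real implementation may need to expose more of the interface (e.g. `mean` and `variance`):

```python
from __future__ import annotations

import torch
from botorch.posteriors import Posterior
from torch import Tensor


class WrappedNormalPosterior(Posterior):
    """Toy posterior wrapping an explicit Normal distribution (illustrative only)."""

    def __init__(self, loc: Tensor, scale: Tensor) -> None:
        super().__init__()
        self.distribution = torch.distributions.Normal(loc, scale)

    def rsample(self, sample_shape: torch.Size | None = None) -> Tensor:
        # Reparameterized sampling, so gradients can flow back to `loc` and `scale`.
        sample_shape = torch.Size() if sample_shape is None else sample_shape
        return self.distribution.rsample(sample_shape)

    @property
    def device(self) -> torch.device:
        return self.distribution.loc.device

    @property
    def dtype(self) -> torch.dtype:
        return self.distribution.loc.dtype
```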
24 | For GPyTorch-based GP models, for which the posterior distribution is a 25 | multivariate Gaussian, 26 | [`GPyTorchPosterior`](https://botorch.readthedocs.io/en/latest/posteriors.html#botorch.posteriors.gpytorch.GPyTorchPosterior) should be used. 27 | -------------------------------------------------------------------------------- /docs/samplers.md: -------------------------------------------------------------------------------- 1 | --- 2 | id: samplers 3 | title: Monte Carlo Samplers 4 | --- 5 | 6 | MC-based acquisition functions rely on the reparameterization trick, which 7 | transforms samples $\epsilon$ from a base distribution into samples from a target 8 | distribution. For example, when drawing posterior samples from a Gaussian 9 | process, the classical parameterization is $\mu(x) + L(x) \epsilon$, where 10 | $\epsilon$ are i.i.d. standard normal, $\mu$ is the mean of the posterior, and $L(x)$ is 11 | a root decomposition of the covariance matrix such that $L(x)L(x)^T = \Sigma(x)$. 12 | 13 | Exactly how base samples are generated when using the reparameterization trick 14 | can have substantial effects on the convergence of gradients estimated from 15 | these samples. Because of this, BoTorch implements a generic module capable of 16 | flexible sampling from any type of probabilistic model. 17 | 18 | An `MCSampler` is a `Module` that provides base samples from a `Posterior` object. 19 | These samplers may then in turn be used in conjunction with MC-based acquisition 20 | functions. BoTorch includes two types of MC samplers for sampling isotropic 21 | normal deviates: a vanilla normal sampler (`IIDNormalSampler`) and a randomized 22 | quasi-Monte Carlo sampler (`SobolQMCNormalSampler`). 23 | 24 | For most use cases, we recommend using `SobolQMCNormalSampler`, as it tends to 25 | produce more accurate (i.e. lower variance) gradient estimates with far fewer 26 | samples relative to the `IIDNormalSampler`. To experiment with alternative 27 | sampling procedures, please see the source code for `SobolQMCNormalSampler` as 28 | an example. 29 |
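As a simple illustration, a sampler with a fixed sample shape can be constructed explicitly and passed to an MC acquisition function (a minimal sketch with random data and an unfitted model):

```python
import torch
from botorch.acquisition import qExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.sampling import SobolQMCNormalSampler

# Toy data and an unfitted model -- illustrative only.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X=train_X, train_Y=train_Y)

# Draw 512 quasi-random base samples; they are cached and reused across calls.
sampler = SobolQMCNormalSampler(sample_shape=torch.Size([512]))
acq_func = qExpectedImprovement(model=model, best_f=train_Y.max(), sampler=sampler)

# Evaluate the acquisition function on a single `q = 1` candidate point.
X_cand = torch.rand(1, 1, 2, dtype=torch.double)
acq_value = acq_func(X_cand)
```

Fixing the base samples in this way makes repeated evaluations of the acquisition function deterministic, which is what allows standard gradient-based optimizers to be applied to the MC estimate.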
-------------------------------------------------------------------------------- /docs/tutorials/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: BoTorch Tutorials 3 | --- 4 | The tutorials here will help you understand and use BoTorch in 5 | your own work. They assume that you are familiar with both 6 | Bayesian optimization (BO) and PyTorch. 7 | * If you are new to BO, we recommend you start with the 8 | [Ax docs](https://ax.dev/docs/bayesopt) and the 9 | following 10 | [tutorial paper](https://arxiv.org/abs/1807.02811). 11 | * If you are new to PyTorch, the easiest way to get started is 12 | with the [What is PyTorch?](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py) 13 | tutorial. 14 | 15 | Error: Loading Plotly timed out.16 | ) : ( 17 |