BoTorch

Latest version: v0.10.0

0.10.0

New Features
* Introduce updated guidelines and a new directory for community contributions (2167).
* Add `qEUBO` preferential acquisition function (2192).
* Add Multi Information Source Augmented GP (2152).

Bug Fixes
* Fix `condition_on_observations` in fully Bayesian models (2151).
* Fix a bug that occurs when splitting single-element bins; use the default BoTorch kernel for BAxUS (2165).
* Fix a bug when non-linear constraints are used with `q > 1` (2168).
* Remove unsupported `X_pending` from `qMultiFidelityLowerBoundMaxValueEntropy` constructor (2193).
* Don't allow `data_fidelities=[]` in `SingleTaskMultiFidelityGP` (2195).
* Fix `EHVI`, `qEHVI`, and `qLogEHVI` input constructors (2196).
* Fix input constructor for `qMultiFidelityMaxValueEntropy` (2198).
* Add ability to not deduplicate points in `_is_non_dominated_loop` (2203).

Other Changes
* Minor improvements to `MVaR` risk measure (2150).
* Add support for multitask models to `ModelListGP` (2154).
* Support unspecified noise in `ContextualDataset` (2155).
* Update `HVKG` sampler to reflect the number of model outputs (2160).
* Relax the restriction in `OneHotToNumeric` that the categoricals must be the trailing dimensions (2166).
* Standardize broadcasting logic of `q(Log)EI`'s `best_f` and `compute_best_feasible_objective` (2171).
* Use regular inheritance instead of dispatcher to special-case `PairwiseGP` logic (2176).
* Support `PBO` in `EUBO`'s input constructor (2178).
* Add `posterior_transform` to `qMaxValueEntropySearch`'s input constructor (2181).
* Do not normalize or standardize dimension if all values are equal (2185).
* Reap deprecated support for objective with 1 arg in `GenericMCObjective` (2199).
* Consistent signature for `get_objective_weights_transform` (2200).
* Update context order handling in `ContextualDataset` (2205).
* Update contextual models for use in MBM (2206).
* Remove `(Identity)AnalyticMultiOutputObjective` (2208).
* Reap deprecated support for `soft_eval_constraint`; please use `botorch.utils.sigmoid` instead (2223).
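
As a migration aid for the `soft_eval_constraint` removal above, here is a minimal sketch using `botorch.utils.sigmoid`; the import location follows the note above, and the `-lhs / eta` scaling that mimics the old helper's smoothed feasibility indicator is an assumption worth verifying on your version.

```python
import torch
from botorch.utils import sigmoid  # location as referenced in the note above

# Constraint slack values for a constraint of the form c(x) <= 0
# (negative values mean the point is feasible).
lhs = torch.tensor([-0.5, 0.0, 0.3])
eta = 1e-3  # temperature of the smooth approximation

# Previously: soft_eval_constraint(lhs, eta=eta)
# Assumed equivalent: a sigmoid of the negated, temperature-scaled slack,
# which is close to 1 for feasible points and close to 0 otherwise.
feasibility_weight = sigmoid(-lhs / eta)
```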

Compatibility
* Pin `mpmath <= 1.3.0` to avoid CI breakages due to removed modules in the latest alpha release (2222).

0.9.5

New Features

Hypervolume Knowledge Gradient (HVKG):
* Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning on the unknown observation(s) that would be received if `X` were evaluated (1950, 1982, 2101); a construction sketch follows this list.
* Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (2094).
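
Below is a minimal construction sketch for `qHypervolumeKnowledgeGradient` on a toy two-objective model; the import path and the `ref_point`/`num_fantasies`/`num_pareto` keyword arguments are assumptions based on these notes and the HVKG tutorial, so check the API reference for your installed version.

```python
import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.models.transforms.outcome import Standardize
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import SumMarginalLogLikelihood
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qHypervolumeKnowledgeGradient,
)

# Toy two-objective training data on the unit square.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.stack([train_X.sum(-1), (train_X ** 2).sum(-1)], dim=-1)

# One single-output GP per objective, collected in a ModelListGP so that
# the objectives can be modeled (and later evaluated) independently.
model = ModelListGP(
    *[
        SingleTaskGP(train_X, train_Y[:, i : i + 1], outcome_transform=Standardize(m=1))
        for i in range(train_Y.shape[-1])
    ]
)
fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))

# Construct HVKG; it values a candidate by the expected gain in hypervolume
# of a fixed-size Pareto set after conditioning on the new observation(s).
hvkg = qHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.zeros(2, dtype=torch.double),  # hypervolume reference point
    num_fantasies=8,   # fantasy models per candidate (assumed keyword)
    num_pareto=10,     # size of the hypervolume-maximizing set (assumed keyword)
)
# The result is a one-shot, KG-style acquisition that can be optimized with
# botorch.optim.optimize_acqf like other knowledge-gradient acquisitions.
```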

Other new features:
* Add `MultiOutputFixedCostModel`, which is useful for decoupled scenarios where the objectives have different costs (2093).
* Enable `q > 1` in acquisition function optimization when nonlinear constraints are present (1793).
* Support different noise levels for different outputs in test functions (2136).
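
A small sketch of the per-output noise support on a multi-objective test problem; the list-valued `noise_std` argument mirrors how the change is described here, but treat its exact handling as an assumption.

```python
import torch
from botorch.test_functions.multi_objective import BraninCurrin

# Two-objective problem with a different observation-noise level per output
# (list-valued noise_std, as described in the note above).
problem = BraninCurrin(noise_std=[0.05, 0.2], negate=True)

X = torch.rand(4, problem.dim, dtype=torch.double)
Y = problem(X)  # noisy evaluations; one column per objective
```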

Bug Fixes
* Fix fantasization with a `FixedNoiseGaussianLikelihood` when `noise` is known and `X` is empty (2090).
* Make `LearnedObjective` compatible with constraints in acquisition functions regardless of `sample_shape` (2111).
* Make input constructors for `qExpectedImprovement`, `qLogExpectedImprovement`, and `qProbabilityOfImprovement` compatible with `LearnedObjective` regardless of `sample_shape` (2115).
* Fix handling of constraints in `qSimpleRegret` (2141).

Other Changes
* Increase default sample size for `LearnedObjective` (2095).
* Allow passing in `X` with or without fidelity dimensions in `project_to_target_fidelity` (2102).
* Use full-rank task covariance matrix by default in SAAS MTGP (2104).
* Rename `FullyBayesianPosterior` to `GaussianMixturePosterior`; add `_is_ensemble` and `_is_fully_bayesian` attributes to `Model` (2108).
* Various improvements to tutorials including speedups, improved explanations, and compatibility with newer versions of libraries.

0.9.4

Compatibility
* Re-establish compatibility with PyTorch 1.13.1 (2083).

0.9.3

Highlights
* Additional "Log" acquisition functions for multi-objective optimization with better numerical behavior, which often leads to significantly improved BO performance over their non-"Log" counterparts:
* `qLogEHVI` (2036).
* `qLogNEHVI` (2045, 2046, 2048, 2051).
* Support fully Bayesian models with `LogEI`-type acquisition functions (2058).
* `FixedNoiseGP` and `FixedNoiseMultiFidelityGP` have been deprecated; their functionalities have been merged into `SingleTaskGP` and `SingleTaskMultiFidelityGP`, respectively (2052, 2053). A brief migration sketch follows this list.
* Removed deprecated legacy model fitting functions: `numpy_converter`, `fit_gpytorch_scipy`, `fit_gpytorch_torch`, `_get_extra_mll_args` (1995, 2050).
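
As a migration sketch for the `FixedNoiseGP` deprecation above: the replacement pattern is to pass the known observation noise via `train_Yvar` to `SingleTaskGP` (the data below is purely illustrative).

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
train_Yvar = torch.full_like(train_Y, 1e-4)  # known observation noise

# Previously: FixedNoiseGP(train_X, train_Y, train_Yvar)
# Now: SingleTaskGP with an explicit train_Yvar argument.
model = SingleTaskGP(train_X, train_Y, train_Yvar=train_Yvar)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
```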

New Features
* Support multiple data fidelity dimensions in `SingleTaskMultiFidelityGP` and (deprecated) `FixedNoiseMultiFidelityGP` models (1956).
* Add `logsumexp` and `fatmax` to handle infinities and control asymptotic behavior in "Log" acquisition functions (1999).
* Add outcome and feature names to datasets and implement `MultiTaskDataset` (2015, 2019); see the dataset sketch after this list.
* Add constrained Hartmann and constrained Gramacy synthetic test problems (2022, 2026, 2027).
* Support observed noise in `MixedSingleTaskGP` (2054).
* Add `PosteriorStandardDeviation` acquisition function (2060).
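
For the dataset changes above, a hedged sketch of constructing a `SupervisedDataset` with explicit feature and outcome names; whether these arguments are optional or keyword-only varies across versions, so this is an illustration rather than the exact signature.

```python
import torch
from botorch.utils.datasets import SupervisedDataset

X = torch.rand(8, 3, dtype=torch.double)
Y = X.sum(dim=-1, keepdim=True)

# Named features and outcomes, as introduced for datasets in this release.
dataset = SupervisedDataset(
    X=X,
    Y=Y,
    feature_names=["x1", "x2", "x3"],
    outcome_names=["objective"],
)
```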

Bug Fixes
* Fix input constructors for `qMaxValueEntropy` and `qMultiFidelityKnowledgeGradient` (1989).
* Fix precision issue that arises from inconsistent data types in `LearnedObjective` (2006).
* Fix fantasization with `FixedNoiseGP` and outcome transforms and use `FantasizeMixin` (2011).
* Fix `LearnedObjective` base sample shape (2021).
* Apply constraints in `prune_inferior_points` (2069).
* Support non-batch evaluation of `PenalizedMCObjective` (2073).
* Fix `Dataset` equality checks (2077).

Other Changes
* Don't allow unused `**kwargs` in `input_constructors` except for a defined set of exceptions (1872, 1985).
* Merge inferred and fixed noise LCE-M models (1993).
* Fix import structure in `botorch.acquisition.utils` (1986).
* Remove deprecated functionality: `weights` argument of `RiskMeasureMCObjective` and `squeeze_last_dim` (1994).
* Make `X`, `Y`, `Yvar` into properties in datasets (2004).
* Make synthetic constrained test functions subclass from `SyntheticTestFunction` (2029).
* Add `construct_inputs` to contextual GP models `LCEAGP` and `SACGP` (2057).

0.9.2

This release fixes several bugs that affected Ax's modular `BotorchModel`, including outcome constraints being silently ignored due to naming mismatches.

Bug Fixes
* Hot fix (1973) for a few issues:
  * A naming mismatch between Ax's modular `BotorchModel` and BoTorch's acquisition input constructors, which led to outcome constraints in Ax not being used with single-objective acquisition functions in Ax's modular `BotorchModel`. The naming has been updated in Ax, and consistent naming is now used in input constructors for single- and multi-objective acquisition functions in BoTorch.
  * A naming mismatch in the `constraints` argument of the acquisition input constructor for `qNoisyLogExpectedImprovement`, which kept constraints from being used.
  * A bug in `compute_best_feasible_objective` that could lead to `-inf` incumbent values.
* Fix setting seed in `get_polytope_samples` (1968).

Other Changes
* Merge `SupervisedDataset` and `FixedNoiseDataset` (1945).
* Constrained tutorial updates (1967, 1970).
* Resolve issues with missing PyTorch binaries for Python 3.11 on Mac (1966).

0.9.1

This is a very minor release; the only change from v0.9.0 is that the `linear_operator` dependency was bumped to 0.5.1 (1963). This was needed since a bug in `linear_operator` 0.5.0 caused failures with some BoTorch models.
