BoTorch

Latest version: v0.11.0


0.8.1

Highlights
* This release includes changes for compatibility with the newest versions of linear_operator and gpytorch.
* Several acquisition functions now have "Log" counterparts, which provide better
numerical behavior for improvement-based acquisition functions in areas where the probability of
improvement is low. For example, `LogExpectedImprovement` (1565) should behave better than
`ExpectedImprovement`. These new acquisition functions are
* `LogExpectedImprovement` (1565).
* `LogNoisyExpectedImprovement` (1577).
* `LogProbabilityOfImprovement` (1594).
* `LogConstrainedExpectedImprovement` (1594).
* Bug fix: Stop `ModelListGP.posterior` from quietly ignoring `Log`, `Power`, and `Bilog` outcome transforms (1563).
* Turn off `fast_computations` setting in linear_operator by default (1547).
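
The motivation for the "Log" variants is a general numerical one: quantities built from the Gaussian density and CDF underflow to exactly zero far from the incumbent, so their logarithm (and gradient) becomes useless, whereas computing directly in log space stays finite. A minimal stdlib-only sketch of this effect (illustrating the principle, not BoTorch's implementation):

```python
import math

def normal_pdf(z):
    """Standard normal density, computed naively."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def log_normal_pdf(z):
    """The same density, computed directly in log space."""
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

# Far from the incumbent, improvement-based quantities underflow to 0.0
# in the naive formulation, so their log is -inf and gradients vanish.
z = -40.0
naive = normal_pdf(z)        # underflows to exactly 0.0
stable = log_normal_pdf(z)   # finite: about -800.92

print(naive, stable)
```

The `Log*` acquisition functions apply this idea (with more care) to the full improvement expressions, which is why they optimize better in flat regions.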

Compatibility
* Require linear_operator == 0.3.0 (1538).
* Require pyro-ppl >= 1.8.4 (1606).
* Require gpytorch == 1.9.1 (1612).

New Features
* Add `eta` to `get_acquisition_function` (1541).
* Support 0d-features in `FixedFeatureAcquisitionFunction` (1546).
* Add timeout ability to optimization functions (1562, 1598).
* Add `MultiModelAcquisitionFunction`, an abstract base class for acquisition functions that require multiple types of models (1584).
* Add `cache_root` option for qNEI in `get_acquisition_function` (1608).

Other changes
* Docstring corrections (1551, 1557, 1573).
* Removal of `_fit_multioutput_independent` and `allclose_mll` (1570).
* Better numerical behavior for fully Bayesian models (1576).
* More verbose Scipy `minimize` failure messages (1579).
* Lower-bound noise in `SaasPyroModel` to avoid Cholesky errors (1586).

Bug fixes
* Error rather than failing silently for NaN values in box decomposition (1554).
* Make `get_bounds_as_ndarray` device-safe (1567).

0.8.0

Highlights
This release includes some backwards-incompatible changes.
* Refactor `Posterior` and `MCSampler` modules to better support non-Gaussian distributions in BoTorch (1486).
* Introduced a `TorchPosterior` object that wraps a PyTorch `Distribution` object and makes it compatible with the rest of `Posterior` API.
* `PosteriorList` no longer accepts Gaussian base samples. It should be used with a `ListSampler` that includes the appropriate sampler for each posterior.
* The MC acquisition functions no longer construct a Sobol sampler by default. Instead, they rely on a `get_sampler` helper, which dispatches an appropriate sampler based on the posterior provided.
* The `resample` and `collapse_batch_dims` arguments to `MCSampler`s have been removed. The `ForkedRNGSampler` and `StochasticSampler` can be used to get the same functionality.
* Refer to the PR for additional changes. We will update the website documentation to reflect these changes in a future release.
* 1191 refactors much of `botorch.optim` to operate based on closures that abstract
away how losses (and gradients) are computed. By default, these closures are created
using multiply-dispatched factory functions (such as `get_loss_closure`), which may be
customized by registering methods with an associated dispatcher (e.g. `GetLossClosure`).
Future releases will contain tutorials that explore these features in greater detail.
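
The registration pattern described above can be illustrated with a minimal type-based dispatcher. All names below are hypothetical stand-ins, not BoTorch's actual API:

```python
# A minimal sketch of a type-based dispatcher, loosely mirroring the
# registration pattern described above. Names are hypothetical.
class Dispatcher:
    def __init__(self):
        self._registry = {}

    def register(self, *types):
        """Decorator that registers a factory for a tuple of argument types."""
        def decorator(fn):
            self._registry[types] = fn
            return fn
        return decorator

    def __call__(self, *args):
        # Find a registered factory whose signature matches via isinstance.
        for types, fn in self._registry.items():
            if len(types) == len(args) and all(
                isinstance(a, t) for a, t in zip(args, types)
            ):
                return fn(*args)
        raise NotImplementedError(
            f"No factory registered for {tuple(type(a) for a in args)}"
        )

get_loss_closure = Dispatcher()

class ExactMLL: ...
class ApproximateMLL: ...

@get_loss_closure.register(ExactMLL)
def _exact(mll):
    return lambda: "exact loss closure"

@get_loss_closure.register(ApproximateMLL)
def _approx(mll):
    return lambda: "approximate loss closure"

print(get_loss_closure(ExactMLL())())  # "exact loss closure"
```

Registering a method for a new MLL type then only requires one more `@get_loss_closure.register(...)` decorator, which is the extensibility the refactor is after.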

New Features
* Add mixed optimization for list optimization (1342).
* Add entropy search acquisition functions (1458).
* Add utilities for straight-through gradient estimators for discretization functions (1515).
* Add support for categoricals in Round input transform and use STEs (1516).
* Add closure-based optimizers (1191).

Other Changes
* Do not count hitting maxiter as optimization failure & update default maxiter (1478).
* `BoxDecomposition` cleanup (1490).
* Deprecate `torch.triangular_solve` in favor of `torch.linalg.solve_triangular` (1494).
* Various docstring improvements (1496, 1499, 1504).
* Remove `__getitem__` method from `LinearTruncatedFidelityKernel` (1501).
* Handle Cholesky errors when fitting a fully Bayesian model (1507).
* Make eta configurable in `apply_constraints` (1526).
* Support SAAS ensemble models in RFFs (1530).
* Deprecate `botorch.optim.numpy_converter` (1191).
* Deprecate `fit_gpytorch_scipy` and `fit_gpytorch_torch` (1191).

Bug Fixes
* Enforce use of float64 in `NdarrayOptimizationClosure` (1508).
* Replace deprecated np.bool with equivalent bool (1524).
* Fix RFF bug when using FixedNoiseGP models (1528).

0.7.3

Highlights
* 1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release.
* 1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected.
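
The failure mode fixed by 1454 can be shown schematically in plain Python (this is an illustration of the train/eval mismatch, not BoTorch code): if the transform's fitted buffers are reset after training, the model was trained on normalized inputs but is silently evaluated on raw ones.

```python
# Schematic illustration of the buffer-reset bug (not BoTorch code).
class Normalize:
    """Min-max scaling with fitted 'buffers' (mins and ranges)."""
    def __init__(self):
        self.mins, self.ranges = None, None

    def fit(self, xs):
        self.mins = min(xs)
        self.ranges = max(xs) - min(xs)

    def __call__(self, x):
        if self.mins is None:   # buffers reset -> silent pass-through
            return x
        return (x - self.mins) / self.ranges

transform = Normalize()
train_x = [100.0, 200.0, 300.0]
transform.fit(train_x)

normalized = transform(200.0)   # 0.5 -- the scale the model was trained on
transform.mins = transform.ranges = None   # the buggy post-training reset
raw = transform(200.0)          # 200.0 -- what the model then sees at eval

print(normalized, raw)
```

Because the pass-through is silent, fits degraded without any error, which is why refitting affected models after upgrading is recommended.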

Compatibility
* Require linear_operator == 0.2.0 (1491).

New Features
* Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (1394, 1408).
* Introduce `AffineInputTransform` (1461).
* Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (1468).

Other Changes
* Add a warning when using float dtype (1193).
* Let Pyre know that `AcquisitionFunction.model` is a `Model` (1216).
* Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (1414).
* Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (1448).
* Adjust `PairwiseGP` `ScaleKernel` prior (1460).
* Pull out `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (1462, 1479).
* Don't use Pyro JIT by default, since it was causing a memory leak (1474).
* Use `get_default_partitioning_alpha` for NEHVI input constructor (1481).

Bug Fixes
* Fix `batch_shape` property of `ModelListGPyTorchModel` (1441).
* Tutorial fixes (1446, 1475).
* Bug-fix for Proximal acquisition function wrapper for negative base acquisition functions (1447).
* Handle `RuntimeError` due to constraint violation while sampling from priors (1451).
* Fix bug in model list with output indices (1453).
* Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (1454).
* Fix a bug in `_fit_multioutput_independent` that failed mll comparison (1455).
* Fix box decomposition behavior with empty or None `Y` (1489).

0.7.2

New Features
* A full refactor of model fitting methods (1134).
* This introduces a new `fit_gpytorch_mll` method that multiple-dispatches
on the model type. Users may register custom fitting routines for different
combinations of MLLs, Likelihoods, and Models.
* Unlike previous fitting helpers, `fit_gpytorch_mll` does **not** pass
`kwargs` to `optimizer` and instead introduces an optional `optimizer_kwargs`
argument.
* When a model fitting attempt fails, `botorch.fit` methods restore modules to their
original states.
* `fit_gpytorch_mll` throws a `ModelFittingError` when all model fitting attempts fail.
* Upon returning from `fit_gpytorch_mll`, `mll.training` will be `True` if fitting failed
and `False` otherwise.
* Allow custom bounds to be passed in to `SyntheticTestFunction` (1415).
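
The error-handling contract described above (snapshot state, restore on failure, raise once all attempts fail) can be sketched in a few lines of plain Python. The names `fit_with_restore` and the `ModelFittingError` stand-in below are hypothetical illustrations, not BoTorch's implementation:

```python
import copy

class ModelFittingError(RuntimeError):
    """Hypothetical stand-in for the error raised when all attempts fail."""

def fit_with_restore(module_state, attempts, max_attempts=3):
    """Sketch of the contract: snapshot state before each attempt,
    restore it on failure, and raise once every attempt has failed.
    `attempts` is a list of callables standing in for optimizer runs."""
    for attempt in attempts[:max_attempts]:
        snapshot = copy.deepcopy(module_state)
        try:
            return attempt(module_state)
        except RuntimeError:
            module_state.clear()
            module_state.update(snapshot)   # restore original parameters
    raise ModelFittingError("all model fitting attempts failed")

state = {"lengthscale": 1.0}

def bad(s):
    s["lengthscale"] = float("nan")   # corrupt state, then fail
    raise RuntimeError("optimizer diverged")

def good(s):
    s["lengthscale"] = 0.5
    return s

result = fit_with_restore(state, [bad, good])
print(result)   # {'lengthscale': 0.5} -- the failed attempt was rolled back
```

The key property is that a failed attempt never leaves corrupted parameters behind: the next attempt (or the caller, on `ModelFittingError`) always sees the pre-fit state.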

Deprecations
* Deprecate weights argument of risk measures in favor of a `preprocessing_function` (1400).
* Deprecate `fit_gpytorch_model`; to be superseded by `fit_gpytorch_mll`.

Other Changes
* Support risk measures in MOO input constructors (1401).

Bug Fixes
* Fix fully Bayesian state dict loading when there are more than 10 models (1405).
* Fix `batch_shape` property of `SaasFullyBayesianSingleTaskGP` (1413).
* Fix `model_list_to_batched` ignoring the `covar_module` of the input models (1419).

0.7.1

Compatibility
* Pin GPyTorch >= 1.9.0 (1397).
* Pin linear_operator == 0.1.1 (1397).

New Features
* Implement `SaasFullyBayesianMultiTaskGP` and related utilities (1181, 1203).

Other Changes
* Support loading a state dict for `SaasFullyBayesianSingleTaskGP` (1120).
* Update `load_state_dict` for `ModelList` to support fully Bayesian models (1395).
* Add `is_one_to_many` attribute to input transforms (1396).

Bug Fixes
* Fix `PairwiseGP` on GPU (1388).

0.7.0

Compatibility
* Require python >= 3.8 (via 1347).
* Support for python 3.10 (via 1379).
* Require PyTorch >= 1.11 (via 1363).
* Require GPyTorch >= 1.9.0 (1347).
* GPyTorch 1.9.0 is a major refactor that factors out the lazy tensor
functionality into a new `LinearOperator` library, which required
a number of adjustments to BoTorch (1363, 1377).
* Require pyro >= 1.8.2 (1379).

New Features
* Add ability to generate the features appended in the `AppendFeatures` input
transform via a generic callable (1354).
* Add new synthetic test functions for sensitivity analysis (1355, 1361).

Other Changes
* Use `time.monotonic()` instead of `time.time()` to measure duration (1353).
* Allow passing `Y_samples` directly in `MARS.set_baseline_Y` (1364).

Bug Fixes
* Patch `state_dict` loading for `PairwiseGP` (1359).
* Fix `batch_shape` handling in `Normalize` and `InputStandardize` transforms (1360).
