GPyTorch

Latest version: v1.11


0.1.1

Features
- Batch GPs, which were previously supported, are now well-documented and much more stable [(see docs)](https://gpytorch.readthedocs.io/en/latest/batch_gps.html)
- Can add "fantasy observations" to models.
- New option to use exact marginal log likelihood and sampling computations via `gpytorch.settings.fast_computations` (slower, but potentially useful for debugging; see the usage sketch below)
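
A rough usage sketch of this option, written against current GPyTorch releases (the 0.1.x API differed slightly; the model and data here are purely illustrative):

```python
import torch
import gpytorch

# Toy data and a standard exact GP, just to make the example self-contained
train_x = torch.linspace(0, 1, 50)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(50)

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
model.train()
likelihood.train()

# Disable the fast iterative routines and fall back to exact
# Cholesky-based computations: slower, but useful for debugging
with gpytorch.settings.fast_computations(
    covar_root_decomposition=False, log_prob=False, solves=False
):
    loss = -mll(model(train_x), train_y)
```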

Bug fixes
- Easier usage of batch GPs
- Reduced bugs in [additive regression models](https://gpytorch.readthedocs.io/en/latest/examples/05_Scalable_GP_Regression_Multidimensional/KISSGP_Additive_Regression_CUDA.html)

0.1.0

0.1.0.rc5

Stability of hyperparameters
- Hyperparameters that are constrained to be positive (e.g. variance, lengthscale, etc.) are now parameterized through the softplus function (`log(1 + e^x)`) rather than through the log function (see the sketch after this list)
- This dramatically improves the numerical stability and optimization of hyperparameters
- Old models that were trained with `log` parameters will still work, but this is deprecated.
- Inference now handles certain numerical floating point round-off errors more gracefully.
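
A minimal sketch of the reparameterization itself (`raw_lengthscale` is a hypothetical stand-in for GPyTorch's internal raw parameters):

```python
import torch
import torch.nn.functional as F

# Unconstrained parameter that the optimizer actually updates
raw_lengthscale = torch.zeros(1, requires_grad=True)

# softplus(x) = log(1 + e^x) is always positive, and unlike exp(x) it grows
# only linearly for large x, so gradient magnitudes stay well-scaled
lengthscale = F.softplus(raw_lengthscale)

print(lengthscale.item())  # 0.6931... = log(2), since softplus(0) = log 2
```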

Various stability improvements to variational inference

Other changes
- `GridKernel` can be used for data that lies on a perfect grid (see the structural sketch after this list).
- New preconditioner for LazyTensors.
- Use batched Cholesky functions for improved performance (requires updating PyTorch)
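
The grid speedup comes from structure: a stationary kernel evaluated on a regular 1D grid yields a Toeplitz covariance matrix, fully determined by its first column, which allows O(n) storage and FFT-based matrix-vector products. A plain-PyTorch illustration of that structure (not GPyTorch's implementation):

```python
import torch

n, lengthscale = 6, 0.2
grid = torch.linspace(0, 1, n)  # regular grid with uniform spacing

def rbf(diff):
    # Stationary kernel: depends only on the difference between inputs
    return torch.exp(-0.5 * (diff / lengthscale) ** 2)

dense = rbf(grid.unsqueeze(-1) - grid.unsqueeze(0))  # full n x n matrix
first_col = rbf(grid - grid[0])                      # O(n) representation

# Every diagonal of a Toeplitz matrix is constant, so the first column
# recovers the entire matrix
for k in range(n):
    assert torch.allclose(torch.diagonal(dense, -k), first_col[k].expand(n - k))
```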

0.1.0.rc4

New features
- Implement diagonal correction for basic variational inference, improving predictive variance estimates. This is on by default.
- `LazyTensor._quad_form_derivative` now has a default implementation! While custom implementations are likely to still be faster in many cases, this means that it is no longer required to implement a custom `_quad_form_derivative` when implementing a new `LazyTensor` subclass.
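
For instance, a trivial dense wrapper now only needs the three core methods below, with gradients through quadratic forms coming from the default implementation. This sketch assumes the `gpytorch.lazy.LazyTensor` interface of this era (`_matmul`, `_size`, `_transpose_nonbatch`); the class itself is illustrative, not part of the library:

```python
import torch
from gpytorch.lazy import LazyTensor  # location in 0.1.x-era releases

class DenseLazyTensor(LazyTensor):
    # Hypothetical wrapper around an explicit matrix; no custom
    # _quad_form_derivative is needed anymore
    def __init__(self, tensor):
        super().__init__(tensor)
        self.tensor = tensor

    def _matmul(self, rhs):
        return self.tensor @ rhs

    def _size(self):
        return self.tensor.shape

    def _transpose_nonbatch(self):
        return DenseLazyTensor(self.tensor.transpose(-1, -2))
```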

Bug fixes
- Fix a number of critical bugs for the new variational inference.
- Do some hyperparameter tuning for the SV-DKL example notebook, and include fancier NN features like batch normalization.
- Made it more likely that operations internally preserve the ability to perform preconditioning for linear solves and log determinants. This may have a positive impact on model performance in some cases.

0.1.0.rc3

Variational inference has been refactored
- Easier to experiment with different variational approximations
- Massive performance improvement for [SV-DKL](https://github.com/cornellius-gp/gpytorch/blob/master/examples/08_Deep_Kernel_Learning/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb)
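
As a sketch of the refactored setup, here is a small variational GP using the current (post-1.0) class names, which differ slightly from the 0.1.0-era ones; trying a different variational approximation is now a one-line change:

```python
import torch
import gpytorch

class SVGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        # Swap in e.g. MeanFieldVariationalDistribution here to experiment
        # with a cheaper variational approximation
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

model = SVGPModel(inducing_points=torch.linspace(0, 1, 16).unsqueeze(-1))
```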

Experimental Pyro integration for variational inference
- See the [example Pyro notebooks](https://github.com/cornellius-gp/gpytorch/tree/master/examples/09_Pyro_Integration)

Lots of tiny bug fixes
(Too many to name, but everything should be better 😬)

0.1.0.rc2
