Opacus

Latest version: v1.4.1

1.4.1

Bug fixes
* Fix DP MultiheadAttention (598) (see the usage sketch after this list)
* Fix: make PRV accountant robust to larger epsilons (606)
* Fix the corner case when the optimizer has no trainable parameters (619)
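
As context for the first fix, here is a minimal usage sketch of `opacus.layers.DPMultiheadAttention`, assuming its call signature mirrors `torch.nn.MultiheadAttention`; the shapes below are illustrative:

```python
import torch
from opacus.layers import DPMultiheadAttention

# Drop-in replacement for nn.MultiheadAttention whose per sample
# gradients Opacus can compute
attn = DPMultiheadAttention(embed_dim=64, num_heads=4)

x = torch.randn(10, 2, 64)  # (seq_len, batch, embed_dim)
out, attn_weights = attn(x, x, x)  # self-attention: query = key = value
print(out.shape)  # torch.Size([10, 2, 64])
```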

1.4

Highlight: Upgraded to PyTorch 1.13+ as required dependency

New features
* Added clipping schedulers (556)
* Util to check per sample gradients (532) (see the sketch after this list)
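
As a companion to the gradient-checking util, a minimal sketch of inspecting per sample gradients directly via `GradSampleModule`; the model and batch size are illustrative:

```python
import torch
import torch.nn as nn
from opacus.grad_sample import GradSampleModule

# Wrapping a module makes each parameter accumulate .grad_sample:
# one gradient row per example in the batch
model = GradSampleModule(nn.Linear(16, 2))

x = torch.randn(8, 16)
model(x).sum().backward()

for name, p in model.named_parameters():
    print(name, p.grad_sample.shape)  # leading dim == batch size (8)
```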

Bug fixes
* Align DataLoader interface with vanilla PyTorch (543)
* Fix GDP accountant epsilon retrieval changing internal state (541)
* Add option to specify number of steps in UniformSampler (550)
* Fix privacy computation script (565)

1.3

New features
* Implement the `PRVAccountant` based on the paper [Numerical Composition of Differential Privacy](https://arxiv.org/abs/2106.02848) (493) (see the sketch after this list)
* Support `nn.EmbeddingBag` (519)
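
A minimal sketch of opting into the new accountant, assuming `PrivacyEngine` selects it via the string key `"prv"`:

```python
from opacus import PrivacyEngine

# "prv" selects the new PRVAccountant; "rdp" (the default) and "gdp"
# remain available
engine = PrivacyEngine(accountant="prv")

# After engine.make_private(...) and some optimizer steps,
# the spent budget can be queried, e.g.:
# epsilon = engine.get_epsilon(delta=1e-5)
```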

Bug fixes
* Fix benchmarks (503, 507, 508)
* Align `make_private_with_epsilon` with `make_private` (509, 526) (see the sketch after this list)
* Test fixes (513, 515, 527, 533)
* Summed discriminator losses to perform one backprop step (474)
* Fixed issue with missing argument in MNIST example (520)
* Functorch gradients: investigation and fix (510)
* Support empty batches (530)
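
For the `make_private_with_epsilon` alignment above, a minimal sketch of the epsilon-targeting entry point; the model, data, and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(2, (64,))),
    batch_size=8,
)

engine = PrivacyEngine()
# Instead of a fixed noise_multiplier, target a total (epsilon, delta)
# budget; Opacus derives the noise level for the given number of epochs
model, optimizer, loader = engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    target_epsilon=5.0,
    target_delta=1e-5,
    epochs=3,
    max_grad_norm=1.0,
)
```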

1.2

New ways to compute per sample gradients
We're glad to present Opacus v1.2, which brings major updates to the per sample gradient computation mechanisms
and incorporates the relevant improvements from recent PyTorch releases.
* Functorch - per sample gradients for all
* ExpandedWeights - yet another way to compute per sample gradients (see the sketch after this list)
* See [Release notes](https://github.com/pytorch/opacus/releases/tag/v1.2.0)
and [GradSampleModule README](https://github.com/pytorch/opacus/blob/main/opacus/grad_sample/README.md)
for detailed feature explanation
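
A minimal sketch of switching between the mechanisms, assuming `make_private` selects the backend via a `grad_sample_mode` argument with keys such as `"hooks"`, `"functorch"`, and `"ew"` (ExpandedWeights); setup values are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(2, (64,))),
    batch_size=8,
)

engine = PrivacyEngine()
model, optimizer, loader = engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    # "hooks" is the default; "functorch" and "ew" use the new
    # PyTorch-provided per sample gradient mechanisms
    grad_sample_mode="functorch",
)
```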

Other improvements
* Fix `utils.unfold2d` with non-symmetric pad/dilation/kernel_size/stride (443)
* Add support for "same" and "valid" padding for the hooks-based grad sampler for convolution layers (see the sketch after this list)
* Improve model validation to support frozen layers and catch copied parameters (489)
* Remove annoying logging from `set_to_none` (471)
* Improved documentation (480, 478, 482, 485, 486, 487, 488)
* Integration test improvements (407, 479, 481, 473)
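
To illustrate the padding support, a minimal sketch with the hooks-based `GradSampleModule` and string padding; shapes are illustrative:

```python
import torch
import torch.nn as nn
from opacus.grad_sample import GradSampleModule

# String padding modes on convolutions now work with the
# hooks-based per sample gradient computation
conv = GradSampleModule(nn.Conv2d(3, 8, kernel_size=3, padding="same"))

x = torch.randn(4, 3, 16, 16)
conv(x).sum().backward()

for p in conv.parameters():
    print(p.shape, p.grad_sample.shape)  # grad_sample leading dim == 4
```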

1.1.3

Bug fixes
* Support layers with a mix of frozen and learnable parameters (437)
* Throw an error when the parameters in the optimizer are not the same as the module's parameters in `make_private` (439)
* Fix unfold2d and add test (443)

Miscellaneous
* Fix typos in DDP tutorial (438)
* Replace `torch.einsum` with `opt_einsum` (440) (see the sketch below)
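
The `opt_einsum` switch is about contraction-order optimization; a minimal sketch of the equivalence (the einsum expression is illustrative, not the one Opacus uses):

```python
import torch
from opt_einsum import contract

a = torch.randn(32, 8, 16)
b = torch.randn(32, 16, 4)

# opt_einsum computes the same result as torch.einsum but can choose
# a cheaper contraction order for multi-operand expressions
out_torch = torch.einsum("bij,bjk->bik", a, b)
out_opt = contract("bij,bjk->bik", a, b)
assert torch.allclose(out_torch, out_opt)
```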

1.1.2

Bug fixes
* Support tied parameters (417)
* Fix callsite sensitiveness of `zero_grad()` (422, 423)
* Improve microbenchmark argument parsing and tests (425)
* Fix opacus nn.functional import (426)

Miscellaneous
* Add microbenchmarks (412, 416)
* Add more badges to readme (424)
