Opt-einsum

Latest version: v3.3.0

2.3.2

Bug Fixes:

- (77) Fixes a PyTorch v1.0 JIT tensor shape issue.

2.3.1

Bug Fixes:

- Minor tweak to release procedure.

2.3.0

This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See Path Overview for more information.

New Features:

- (60) A new greedy implementation has been added which is up to two orders of magnitude faster for 200 tensors.
- (73) Adds a new branch path that uses greedy ideas to prune the optimal exploration space, providing a better path than greedy at a fraction of the cost of a full optimal search.
- (73) Adds a new auto keyword to the `opt_einsum.contract` path option. This keyword automatically chooses the best path technology that takes under 1ms to execute (see the sketch after this list).
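
To illustrate, the strategy is selected through the path argument of `opt_einsum.contract`. A minimal sketch assuming NumPy arrays, using the `optimize` keyword spelling described in the Enhancements below; the strategy names 'greedy', 'branch-all', and 'optimal' follow the opt_einsum documentation:

```python
import numpy as np
import opt_einsum as oe

a, b, c = (np.random.rand(8, 8) for _ in range(3))

# 'auto' automatically selects a path technology for the problem size,
# keeping the path-finding overhead under roughly 1ms
result = oe.contract('ij,jk,kl->il', a, b, c, optimize='auto')

# the individual strategies can also be requested explicitly
for strategy in ('greedy', 'branch-all', 'optimal'):
    oe.contract('ij,jk,kl->il', a, b, c, optimize=strategy)
```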

Enhancements:

- (61) The `opt_einsum.contract` `path` keyword has been renamed to `optimize` to more closely match NumPy; `path` will be deprecated in the future.
- (61) `opt_einsum.contract_path` now returns an `opt_einsum.contract.PathInfo` object that can be queried for the scaling, flops, and intermediates of the path (see the sketch after this list). The print representation of this object is identical to before.
- (61) Based on community feedback, the default `memory_limit` is now unlimited.
- (66) The Torch backend will now use `tensordot` when the installed version of Torch includes this functionality.
- (68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
- (74) The default transpose operation can now be overridden to take advantage of more advanced tensor transpose libraries.
- (73) The optimal path search is now significantly faster.
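
A brief sketch of querying the new return value; the `opt_cost` and `largest_intermediate` attribute names are assumed from the opt_einsum documentation:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(10, 15)
b = np.random.rand(15, 20)

# contract_path returns the contraction order plus a PathInfo object
path, info = oe.contract_path('ij,jk->ik', a, b)

print(path)  # e.g. [(0, 1)]
print(info)  # the familiar human-readable report

# assumed attribute names for the queried quantities:
print(info.opt_cost)              # estimated FLOP count of the chosen path
print(info.largest_intermediate)  # elements in the biggest intermediate tensor
```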

Bug Fixes:

- (72) Fixes the "Interleaved Input" syntax and adds documentation.
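
A minimal sketch of the repaired "Interleaved Input" syntax, which also demonstrates the hashable-index enhancement above; the string labels are arbitrary choices for illustration:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(4, 5)
b = np.random.rand(5, 6)

# operands alternate with index sequences; the last sequence names the output.
# Any hashable object (here, plain strings) may serve as an index label.
out = oe.contract(a, ('rows', 'inner'), b, ('inner', 'cols'), ('rows', 'cols'))
```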

2.2.0

New Features:
- (48) Intermediates can now be shared between contractions; see [here](https://optimized-einsum.readthedocs.io/en/latest/sharing_intermediates.html) for more details and the sketch after this list.
- (53) Intermediate caching is thread-safe.
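
A minimal sketch of sharing, assuming the `opt_einsum.shared_intermediates` context manager described in the linked documentation:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(10, 10)
b = np.random.rand(10, 10)
c = np.random.rand(10, 10)

# contractions performed inside the context cache and reuse common intermediates
with oe.shared_intermediates():
    x = oe.contract('ij,jk->ik', a, b)
    y = oe.contract('ij,jk,kl->il', a, b, c)  # can reuse the a @ b product
```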

Enhancements:
- (48) Expressions are now mapped to a non-unicode index set so that unicode input is supported for all backends.
- (58) Adds tensorflow and theano support for shared intermediates.

Bug Fixes:
- (41) PyTorch indices are mapped back to a small `a-z` subset valid for PyTorch's einsum implementation.

2.1.3

Bug Fixes:
- Fixes unicode issue for large numbers of tensors in Python 2.7.
- Fixes unicode install bug in `README.md`.

2.1.2

Bug Fixes:
- Ensures `versioneer.py` is in `MANIFEST.in` for a clean pip install.
