Opt-einsum

Latest version: v3.3.0


2.1.1

Bug Fixes:
- Minor tweak to release procedure.

2.1.0

`opt_einsum` continues to improve its support for backends beyond NumPy, now including PyTorch.

We have also published the opt_einsum package in the Journal of Open Source Software. If you use this package in your work, please consider [citing us](https://doi.org/10.21105/joss.00753)!

New features:

- PyTorch backend support
- Tensorflow eager-mode execution backend support

Enhancements:

- Intermediate tensordot-like expressions are now ordered to avoid transposes.
- CI now uses conda backend to better support GPU and tensor libraries.
- Now accepts arbitrary unicode indices rather than a subset.
- New auto path option which switches between optimal and greedy at four tensors.
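The new `auto` path option can be sketched as follows — a minimal example assuming `opt_einsum` and NumPy are installed; the shapes are illustrative:

```python
import numpy as np
import opt_einsum as oe

a = np.random.rand(4, 5)
b = np.random.rand(5, 6)
c = np.random.rand(6, 7)

# optimize='auto' uses the exhaustive 'optimal' search for small
# networks and switches to the cheaper 'greedy' search at four tensors
out = oe.contract('ij,jk,kl->il', a, b, c, optimize='auto')
```

The result is identical to chaining the contractions by hand (`a @ b @ c` here); only the order in which intermediates are formed changes.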

Bug fixes:

- Fixed issue where broadcast indices were incorrectly locked out of tensordot-like evaluations even after their dimension was broadcast.

2.0.1

`opt_einsum` is a powerful tensor contraction order optimizer for NumPy and related ecosystems.

New Features
- Allows unlimited Unicode indices.
- Adds a Journal of Open Source Software paper.
- Minor documentation improvements.

2.0.0

`opt_einsum` is a powerful tensor contraction order optimizer for NumPy and related ecosystems.

New Features
- Expressions can be precompiled so that the expression optimization need not happen multiple times.
- The `greedy` order optimization algorithm has been tuned to be able to handle hundreds of tensors in several seconds.
- Input indices can now be unicode so that expressions can have many thousands of indices.
- GPU and distributed computing backends such as Dask, TensorFlow, CuPy, Theano, and Sparse have been added.
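Precompiled expressions can be sketched with `contract_expression`, which takes the einsum string and input shapes, optimizes the path once, and returns a reusable callable — a minimal example assuming `opt_einsum` and NumPy are installed:

```python
import numpy as np
import opt_einsum as oe

# Optimize the contraction order once, up front, from shapes alone
expr = oe.contract_expression('ab,bc,cd->ad', (8, 6), (6, 6), (6, 4))

# The returned expression can then be applied to many concrete arrays
# without repeating the path search
x = np.random.rand(8, 6)
y = np.random.rand(6, 6)
z = np.random.rand(6, 4)
result = expr(x, y, z)
```

This is useful when the same contraction is evaluated in a loop, since the path-finding cost is paid only once.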

Bug Fixes
- An error affecting cases where `opt_einsum` mistook broadcasting operations for matrix multiplication has been fixed.
- Most error messages are now more expressive.

1.0

Official 1.0 release.

Einsum is a very powerful function for contracting tensors of arbitrary dimension and index. However, it is only optimized to contract two terms at a time, resulting in non-optimal scaling for contractions with many terms. `opt_einsum` aims to fix this by optimizing the contraction order, which can yield arbitrarily large speedups at the cost of additional intermediate tensors.

`opt_einsum` is also integrated into the `np.einsum` function as of NumPy v1.12, via the `optimize` keyword.
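The NumPy integration can be sketched directly with `np.einsum` and `np.einsum_path` (NumPy ≥ 1.12); the shapes below are illustrative:

```python
import numpy as np

a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)

# optimize=True enables contraction-order optimization inside np.einsum
out = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)

# np.einsum_path reports the chosen contraction order and a printable
# summary of the estimated cost savings
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
```

Printing `info` shows the pairwise contraction order and the estimated FLOP reduction versus a naive single-shot contraction.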

0.2.0

A large step toward a full 1.0 release. BLAS usage is now automatically applied to all operations. Future releases will be more careful with regard to views and needless data copying.
