Opt-einsum

3.3.0

Adds an ``object`` backend for optimized contractions on arbitrary Python objects.
  
  New Features
  - (145) Adds an ``object``-based backend so that ``contract(backend='object')`` can be used on arbitrary objects such as SymPy symbols (see the sketch below).
  
  Enhancements
  - (140) Better error messages when the requested ``contract`` backend cannot be found.
  - (141) Adds a check with RandomOptimizers to ensure the objects are not accidentally reused for different contractions.
  - (149) Limits the ``remaining`` category of the ``contract_path`` output to at most 20 tensors, since the memory requirements and the number of printed lines scale quadratically for large contractions.
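
  A minimal usage sketch of the ``object`` backend, assuming SymPy is installed; the symbol names and shapes are illustrative only:

  ```python
  import numpy as np
  import sympy
  import opt_einsum as oe

  # 2x2 arrays of SymPy symbols held in dtype=object NumPy arrays.
  a = np.array(sympy.symbols("a:4"), dtype=object).reshape(2, 2)
  b = np.array(sympy.symbols("b:4"), dtype=object).reshape(2, 2)

  # backend='object' performs the contraction with plain Python arithmetic.
  result = oe.contract("ij,jk->ik", a, b, backend="object")
  print(result[0, 0])  # e.g. a0*b0 + a1*b2
  ```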

3.2.1

Bug Fixes
  - (131) Fixes an Einstein subscript error message so that it points to the correct value.
  - (135) Lazily loads JAX, in the same way as other backends.

3.2.0

Small fixes for the ``dp`` path and support for a new mars backend.
  
  New Features
  - (109) Adds mars backend support.
  
  Enhancements
  - (110) New ``auto-hq`` and ``random-greedy-128`` paths (see the sketch below).
  - (119) Fixes several edge cases in the ``dp`` path.
  
  Bug fixes
  - (127) Fixes an issue where Python 3.6 features were required even though Python 3.5 is ``opt_einsum``'s stated minimum version.
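
  A brief sketch of selecting the new paths; the equation and shapes are illustrative only:

  ```python
  import numpy as np
  import opt_einsum as oe

  arrays = [np.random.rand(8, 8) for _ in range(6)]
  eq = "ab,bc,cd,de,ef,fa->"

  # 'random-greedy-128' runs 128 repeats of the randomized-greedy optimizer,
  # while 'auto-hq' trades extra path-finding time for higher-quality paths.
  path, info = oe.contract_path(eq, *arrays, optimize="random-greedy-128")
  print(info)  # report of the chosen path, its scaling, and FLOP estimate

  result = oe.contract(eq, *arrays, optimize="auto-hq")
  ```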

3.1.0

Adds a new dynamic programming algorithm to the suite of paths.
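
A minimal sketch of selecting the dynamic programming optimizer via the ``dp`` path name used elsewhere in this changelog; the equation and shapes are illustrative:

```python
import numpy as np
import opt_einsum as oe

arrays = [np.random.rand(4, 4) for _ in range(5)]

# The dynamic programming optimizer searches over pairwise contraction
# orders and returns the chosen sequence of (i, j) contraction steps.
path, info = oe.contract_path("ab,bc,cd,de,ea->", *arrays, optimize="dp")
print(path)
```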

3.0.1

Alters `setup.py` to correctly state that `opt_einsum` requires Python 3.5+. This now correctly affects PyPI and other pip mirror downloads; v3.0.0 will be removed from PyPI to prevent further issues.

3.0.0

This release moves `opt_einsum` to be backend agnostic while adding support for
  additional backends such as JAX and Autograd. Support for Python 2.7 has been dropped, Python 3.5 is the new minimum version, and a Python deprecation policy equivalent to NumPy's has been adopted.
  
  New Features
  - (78) A new random optimizer has been implemented which uses Boltzmann weighting to explore alternative near-minimum paths with greedy-like schemes. This provides fairly large path performance enhancements with only a linear overhead in path-finding time.
  - (78) A new PathOptimizer class has been implemented to provide a framework for building new optimizers. For example, custom cost functions can now be supplied to the greedy formalism to build custom optimizers without much additional code.
  - (81) The `backend="auto"` keyword has been implemented for `contract`, allowing automatic detection of the correct backend based on the tensors provided in the contraction (see the sketch below).
  - (88) Autograd and JAX support have been implemented.
  - (96) Python 2 functionality has been deprecated and devops improvements have been made.
  
  Enhancements
  - (84) The `contract_path` function can now accept shape tuples rather than full tensors.
  - (84) The automated path-selection logic in `contract_path` has been refactored into a standalone function.
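
  A brief sketch of backend auto-detection and the reusable optimizer objects; the equation, shapes, and `RandomGreedy` settings are illustrative:

  ```python
  import numpy as np
  import opt_einsum as oe

  a, b, c = (np.random.rand(16, 16) for _ in range(3))

  # backend="auto" dispatches on the type of the supplied tensors, so NumPy
  # arrays use NumPy while e.g. torch tensors would use the torch backend.
  result = oe.contract("ij,jk,kl->il", a, b, c, backend="auto")

  # A reusable optimizer instance built on the new PathOptimizer framework;
  # a given instance should not be reused for a different contraction.
  opt = oe.RandomGreedy(max_repeats=32)
  path, info = oe.contract_path("ij,jk,kl->il", a, b, c, optimize=opt)
  ```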

2.3.2

Bug Fixes:
  
  - (77) Fixes a PyTorch v1.0 JIT tensor shape issue.

2.3.1

Bug Fixes:
  
  - Minor tweak to release procedure.

2.3.0

This release primarily focuses on expanding the suite of available path technologies to provide better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See the Path Overview documentation for more information.
  
  New Features:
  
  - (60) A new greedy implementation has been added which is up to two orders of magnitude faster for 200 tensors.
  - (73) Adds a new `branch` path that uses greedy ideas to prune the `optimal` exploration space, providing a better path than `greedy` at a cost below that of the full `optimal` search.
  - (73) Adds a new `auto` keyword to the `opt_einsum.contract` `path` option. This keyword automatically chooses the best path technology that takes under 1 ms to execute.
  
  Enhancements:
  
  - (61) The `opt_einsum.contract` `path` keyword has been changed to `optimize` to more closely match NumPy; `path` will be deprecated in the future.
  - (61) `opt_einsum.contract_path` now returns an `opt_einsum.contract.PathInfo` object that can be queried for the scaling, FLOP count, and intermediates of the path; its print representation is identical to before (see the sketch below).
  - (61) Based on community feedback, `memory_limit` is now unlimited by default.
  - (66) The Torch backend now uses `tensordot` when the installed version of Torch provides this functionality.
  - (68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
  - (74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
  - (73) The optimal path is now significantly faster.
  
  Bug fixes:
  
  - (72) Fixes the "Interleaved Input" syntax and adds documentation.
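
  A short sketch of the renamed keyword and the `PathInfo` return value; the equation and array sizes are illustrative:

  ```python
  import numpy as np
  import opt_einsum as oe

  arrays = [np.random.rand(10, 10) for _ in range(4)]

  # 'optimize' replaces the old 'path' keyword; 'auto' picks a strategy
  # based on the number of input tensors.
  result = oe.contract("ab,bc,cd,da->", *arrays, optimize="auto")

  # contract_path now returns a PathInfo object; printing it produces the
  # familiar report of scaling, FLOP count, and intermediates.
  path, path_info = oe.contract_path("ab,bc,cd,da->", *arrays, optimize="greedy")
  print(path_info)
  ```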

2.2.0

New features:
  - (48) Intermediates can now be shared between contractions; see [here](https://optimized-einsum.readthedocs.io/en/latest/sharing_intermediates.html) for more details and the sketch below.
  - (53) Intermediate caching is thread safe.
  
  Enhancements:
  - (48) Expressions are now mapped to a non-Unicode index set so that Unicode input is supported for all backends.
  - (58) Adds TensorFlow and Theano support for shared intermediates.
  
  Bug fixes:
  - (41) PyTorch indices are mapped back to a small `a-z` subset valid for PyTorch's einsum implementation.
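
  A minimal sketch of sharing intermediates between related contractions; the equations and shapes are illustrative:

  ```python
  import numpy as np
  from opt_einsum import contract, shared_intermediates

  x, y, z = (np.random.rand(8, 8) for _ in range(3))

  # Within the context, intermediates produced by one contraction are cached
  # and can be reused by later contractions instead of being recomputed.
  with shared_intermediates():
      marginal_ad = contract("ab,bc,cd->ad", x, y, z)
      marginal_ac = contract("ab,bc,cd->ac", x, y, z)
  ```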

2.1.3

Bug fixes:
  - Fixes a Unicode issue for large numbers of tensors in Python 2.7.
  - Fixes a Unicode install bug in `README.md`.

2.1.2

Bug Fixes:
  - Ensures `versioneer.py` is in MANIFEST.in for a clean pip install.

2.1.1

Bug Fixes:
  - Minor tweak to release procedure.

2.1.0

`opt_einsum` continues to improve its support for backends beyond NumPy, now adding PyTorch.
  
  We have also published the opt_einsum package in the Journal of Open Source Software. If you use this package in your work, please consider [citing us](https://doi.org/10.21105/joss.00753)!
  
  New features:
  
  - PyTorch backend support (see the sketch below)
  - TensorFlow eager-mode execution backend support
  
  Enhancements:
  
  - Intermediate tensordot-like expressions are now ordered to avoid transposes.
  - CI now uses conda backend to better support GPU and tensor libraries.
  - Now accepts arbitrary unicode indices rather than a subset.
  - New auto path option which switches between optimal and greedy at four tensors.
  
  Bug fixes:
  
  - Fixed issue where broadcast indices were incorrectly locked out of tensordot-like evaluations even after their dimension was broadcast.
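
  A minimal sketch of the PyTorch backend, assuming PyTorch is installed; the equation and shapes are illustrative:

  ```python
  import torch
  import opt_einsum as oe

  a = torch.rand(8, 8)
  b = torch.rand(8, 8)
  c = torch.rand(8, 8)

  # Passing torch tensors with backend='torch' keeps the whole contraction
  # in PyTorch, using tensordot-like operations where possible.
  result = oe.contract("ij,jk,kl->il", a, b, c, backend="torch")
  ```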

2.0.1

`opt_einsum` is a powerful tensor contraction order optimizer for NumPy and related ecosystems.
  
  New Features
  - Allows unlimited Unicode indices.
  - Adds a Journal of Open Source Software paper.
  - Minor documentation improvements.

2.0.0

`opt_einsum` is a powerful tensor contraction order optimizer for NumPy and related ecosystems.
  
  New Features
  - Expressions can be precompiled so that the expression optimization need not happen multiple times (see the sketch below).
  - The `greedy` order optimization algorithm has been tuned to be able to handle hundreds of tensors in several seconds.
  - Input indices can now be Unicode so that expressions can have many thousands of indices.
  - GPU and distributed computing backends such as Dask, TensorFlow, CuPy, Theano, and Sparse have been added.
  
  Bug Fixes
  - An error affecting cases where opt_einsum mistook broadcasting operations for matrix multiplication has been fixed.
  - Most error messages are now more expressive.
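
  A short sketch of precompiling an expression; the equation and shapes are illustrative:

  ```python
  import numpy as np
  import opt_einsum as oe

  # The contraction path is found once when the expression is built, and the
  # resulting callable can then be applied repeatedly to matching arrays.
  expr = oe.contract_expression("ij,jk,kl->il", (20, 30), (30, 40), (40, 50))

  a = np.random.rand(20, 30)
  b = np.random.rand(30, 40)
  c = np.random.rand(40, 50)
  result = expr(a, b, c)
  ```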

1.0

Official 1.0 release.
  
  Einsum is a very powerful function for contracting tensors of arbitrary dimension and index. However, it is only optimized to contract two terms at a time, resulting in non-optimal scaling for contractions with many terms. Opt_einsum aims to fix this by optimizing the contraction order, which can lead to arbitrarily large speedups at the cost of additional intermediate tensors.
  
  Opt_einsum's approach is also integrated into the `np.einsum` function as of NumPy v1.12.
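
  A small illustration of the idea; the equation and shapes are illustrative:

  ```python
  import numpy as np
  import opt_einsum as oe

  a = np.random.rand(10, 30)
  b = np.random.rand(30, 5)
  c = np.random.rand(5, 60)

  # A naive einsum evaluates the whole sum in one pass over all indices,
  # while contract picks a pairwise order, e.g. (a @ b) @ c, which scales
  # far better as the number of tensors grows.
  r_naive = np.einsum("ij,jk,kl->il", a, b, c, optimize=False)
  r_opt = oe.contract("ij,jk,kl->il", a, b, c)
  assert np.allclose(r_naive, r_opt)
  ```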

0.2.0

A large step towards a full 1.0 release. BLAS usage is now automatically applied to all operations. Future releases will be more careful with regard to views and needless data copying.

0.1.1

Adds Python 3 support in addition to installation through a `setup.py` command.