This release primarily focuses on expanding the suite of available path technologies, providing better optimization characteristics for 4-20 tensors while decreasing the time to find paths for 50-200+ tensors. See the Path Overview for more information.
New Features:
- (60) A new greedy implementation has been added which is up to two orders of magnitude faster for problems with 200 tensors.
- (73) Adds a new branch path that uses greedy ideas to prune the optimal exploration space, providing better paths than greedy at a fraction of the optimal search's cost.
- (73) Adds a new `auto` keyword to the `opt_einsum.contract` path option. This keyword automatically chooses the best path technology whose search takes under 1 ms to execute.
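A minimal sketch of requesting the new path technologies, assuming opt_einsum and NumPy are installed; the string values mirror the path names described above:

```python
# Sketch: choosing a path technology via the contract path option.
# Assumes opt_einsum and NumPy are available.
import numpy as np
import opt_einsum as oe

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
c = np.random.rand(8, 8)

# "auto" picks an appropriate path technology for the problem size;
# "greedy" and "optimal" can also be requested explicitly.
result = oe.contract("ij,jk,kl->il", a, b, c, optimize="auto")

# The contracted result matches a plain einsum evaluation.
expected = np.einsum("ij,jk,kl->il", a, b, c)
```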
Enhancements:
- (61) The `opt_einsum.contract` `path` keyword has been changed to `optimize` to more closely match NumPy; `path` will be deprecated in the future.
- (61) `opt_einsum.contract_path` now returns an `opt_einsum.contract.PathInfo` object that can be queried for the scaling, flops, and intermediates of the path. The print representation of this object is identical to before.
- (61) The default `memory_limit` is now unlimited, based on community feedback.
- (66) The Torch backend will now use `tensordot` when the installed version of Torch includes this functionality.
- (68) Indices can now be any hashable object when provided in the "Interleaved Input" syntax.
- (74) Allows the default transpose operation to be overridden to take advantage of more advanced tensor transpose libraries.
- (73) The optimal path search is now significantly faster.
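A short sketch of the new `contract_path` return value, assuming opt_einsum and NumPy are installed; the printed summary is the same table as in previous releases:

```python
# Sketch: inspecting the path object returned by contract_path.
# Assumes opt_einsum and NumPy are available.
import numpy as np
import opt_einsum as oe

x = np.random.rand(16, 16)
y = np.random.rand(16, 16)
z = np.random.rand(16, 16)

# contract_path returns the raw contraction order plus a PathInfo
# summary that can be queried for scaling, flops, and intermediates.
path, info = oe.contract_path("ab,bc,cd->ad", x, y, z)

print(path)  # list of pairwise contraction steps
print(info)  # human-readable summary, identical to the old printout
```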
Bug fixes:
- (72) Fixes the "Interleaved Input" syntax and adds documentation.
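A minimal sketch of the fixed "Interleaved Input" syntax, assuming opt_einsum and NumPy are installed; per (68), the axis labels need only be hashable, so strings are used here instead of the usual integers:

```python
# Sketch: interleaved-input syntax with hashable (non-integer) labels.
# Assumes opt_einsum and NumPy are available.
import numpy as np
import opt_einsum as oe

x = np.random.rand(4, 5)
y = np.random.rand(5, 6)

# Operands alternate with index lists; the final list gives the
# output indices. Any hashable object can label an axis.
result = oe.contract(x, ["row", "mid"], y, ["mid", "col"], ["row", "col"])

expected = x @ y  # equivalent matrix product
```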