This release includes a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. The release includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.
A summary of changes follows:
- Support for PyTorch complex tensors. The user is now expected to pass in complex tensors of shape `[batch_size, num_chans, height, width]` for a 2D imaging problem. It's still possible to pass in real tensors: use shape `[batch_size, num_chans, height, width, 2]`, where the last dimension holds the real and imaginary parts. The backend uses complex values for efficiency.
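As an illustration of the two accepted layouts (a minimal sketch using plain PyTorch tensor views, not torchkbnufft itself, with hypothetical sizes), the complex layout and the real layout are related by `torch.view_as_real` and `torch.view_as_complex`:

```python
import torch

# Hypothetical sizes for a 2D imaging problem
batch_size, num_chans, height, width = 1, 8, 64, 64

# Complex layout: [batch_size, num_chans, height, width]
image = torch.randn(batch_size, num_chans, height, width, dtype=torch.complex64)

# Equivalent real layout: [batch_size, num_chans, height, width, 2],
# with the trailing dimension holding (real, imag) pairs
image_real = torch.view_as_real(image)

assert image_real.shape == (batch_size, num_chans, height, width, 2)
assert torch.equal(torch.view_as_complex(image_real), image)
```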
- A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is asynchronous task-level parallelism via `torch.jit.fork` - see [interp.py](https://github.com/mmuckley/torchkbnufft/blob/master/torchkbnufft/_nufft/interp.py) for details.
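To illustrate the pattern (a simplified sketch, not the actual interpolation code in `interp.py`), `torch.jit.fork` launches a function as an asynchronous task and `torch.jit.wait` collects its result:

```python
import torch

def chunk_work(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for per-chunk interpolation work
    return x.sum()

def parallel_sum(chunks):
    # Launch one asynchronous task per chunk, then gather the results
    futures = [torch.jit.fork(chunk_work, chunk) for chunk in chunks]
    return torch.stack([torch.jit.wait(f) for f in futures]).sum()
```

In the package itself this is combined with TorchScript compilation, so the forked tasks are not serialized by the Python GIL.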
- The backend has been substantially rewritten for higher code quality, adding type annotations and compiling performance-critical functions with `torch.jit.script` to avoid the Python GIL.
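For example (a toy function, not one from the package), a fully type-annotated function can be compiled with `torch.jit.script`, after which calls into it execute in TorchScript rather than the Python interpreter:

```python
import torch

@torch.jit.script
def scale_and_shift(x: torch.Tensor, alpha: float, beta: float) -> torch.Tensor:
    # Type annotations let TorchScript compile this to its own IR,
    # so the compiled code can run without holding the Python GIL
    return alpha * x + beta
```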
- A much-improved density compensation function, `calc_density_compensation_function`, thanks to a contribution from chaithyagr, following a suggestion from zaccharieramzi.
- Simplified utility functions `calc_toeplitz_kernel` and `calc_tensor_spmatrix`.
- The [documentation](https://torchkbnufft.readthedocs.io/en/stable/) has been completely rewritten: it upgrades to the Read the Docs template, improves the table of contents, adds mathematical descriptions of core operators, and includes a dedicated basic usage section.
- Dedicated SENSE-NUFFT operators have been removed. Wrapping these with `torch.autograd.Function` provided no benefit, so there's no need to keep them. Users now pass their sensitivity maps directly to the `forward` function of `KbNufft` and `KbNufftAdjoint`.
- Rewritten notebooks and README files.
- New `CONTRIBUTING.md`.
- Removed `mrisensesim.py` as it is not a core part of the package.