Neural-tangents

Latest version: v0.6.5

0.4.0

New features:
- [Add continuous integration with GitHub Actions.](https://github.com/google/neural-tangents/commit/bb1b6debd0dc7009547facf794ad05cac89bc465)
- [Python 3.10 support.](https://github.com/google/neural-tangents/commit/bb1b6debd0dc7009547facf794ad05cac89bc465)

Improvements:
- Various internal refactoring and [tighter tests](https://github.com/google/neural-tangents/commit/7055fd97c7b9fb8644e8467ed3071a2a61551d2d).

Bugfixes:
- [Fix values and gradients of non-differentiable `kernel_fn` at zero inputs to be consistent with finite-width kernels, and how JAX defines gradients of non-differentiable functions to be the mean sub-gradient](https://github.com/google/neural-tangents/commit/31ab161d923cfd9bf82a9e8d234744ce29fb99ed), see also #123.
- [Fix wrong treatment of `b_std=None` in the infinite-width limit with `parameterization='standard'`](https://github.com/google/neural-tangents/commit/a3b2a8c5bcdcb1d168e2b8290d2d96fe768188d6), see also #123.
- [Fix a bug in `nt.batch` when `x2 = None` and inputs are PyTrees](https://github.com/google/neural-tangents/commit/ae01c9f487598f6350e5efcd8bcdb701c2dd10b4).

Breaking changes:
- Bump requirements to `jax==0.3` and `frozendict==2.3`.

0.3.9

New Features:
* [New nonlinearities](https://github.com/google/neural-tangents/commit/0409a42fadd9e6da1ec6680c23bddfd64e9d32ce):
  * [`stax.Hermite`](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.Hermite);
  * [`stax.Exp`](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.Exp);
  * [`stax.Gaussian`](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.Gaussian);
  * [`stax.ExpNormalized`](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.ExpNormalized).
* [Support and default to `b_std=None` in `stax` layers, treated as a symbolic zero, i.e. the same behavior as `b_std=0.`, but without creating a redundant bias array.](https://github.com/google/neural-tangents/commit/7d01d6513bf7bce5d227aa9f223eb8353cc8c74b)
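
To make the `b_std=None` convention concrete, here is a minimal numpy sketch (hypothetical helper names, not the actual `stax` internals): the bias array is simply omitted from the parameter tuple, while the forward pass matches `b_std=0.` exactly.

```python
import numpy as np

def dense_init(rng, input_dim, output_dim, W_std=1.0, b_std=None):
    """Sketch of a dense-layer initializer. With b_std=None the bias is a
    symbolic zero: no bias array is allocated at all."""
    W = rng.normal(size=(input_dim, output_dim)) * W_std / np.sqrt(input_dim)
    b = None if b_std is None else rng.normal(size=(output_dim,)) * b_std
    return W, b

def dense_apply(params, x):
    # A None bias is skipped, which is numerically identical to adding zeros.
    W, b = params
    out = x @ W
    return out if b is None else out + b

rng = np.random.default_rng(0)
params = dense_init(rng, input_dim=3, output_dim=4, b_std=None)
y = dense_apply(params, np.ones((2, 3)))
```

This also illustrates the serialization caveat noted under "Breaking changes": a parameter tree that used to contain a zero-filled array now contains `None` in its place.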

Breaking changes:
* [Bump requirements to JAX v0.2.25](https://github.com/google/neural-tangents/commit/65d80277e4f2d2b0c285bca52937a3248200877f). As a consequence, CUDA 10 support is dropped to avoid https://github.com/google/neural-tangents/issues/122.
* [The `b_std=None` change could be breaking in very rare edge cases: the dummy bias array is replaced with `None`, which may break your serialization routine.](https://github.com/google/neural-tangents/commit/7d01d6513bf7bce5d227aa9f223eb8353cc8c74b)

0.3.8

New Features:

* [`stax.Elementwise`](https://github.com/google/neural-tangents/commit/25788a98b4a93b80f4f695247c745453baa48bc5) - a layer for generic elementwise functions requiring the user to specify _only_ scalar-valued `nngp_fn : (cov12, var1, var2) |-> E[fn(x_1) * fn(x_2)]`. The NTK computation (thanks to SiuMath) and vectorization over the underlying `Kernel` happen automatically under the hood. If you can't derive the `nngp_fn` for your function, use [`stax.ElementwiseNumerical`](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.ElementwiseNumerical). See [docs](https://neural-tangents.readthedocs.io/en/latest/neural_tangents.stax.html#neural_tangents.stax.Elementwise) for more details.
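
To make the `nngp_fn` contract concrete: for `fn = exp` the expectation `E[fn(x_1) * fn(x_2)]` over a centered bivariate Gaussian has a simple closed form, which a quick Monte Carlo check confirms. This is a standalone numpy sketch of the quantity the user supplies, not code tied to the library:

```python
import numpy as np

def nngp_fn_exp(cov12, var1, var2):
    # Closed form for fn = exp: if (x1, x2) is centered bivariate Gaussian
    # with Var[x1] = var1, Var[x2] = var2, Cov[x1, x2] = cov12, then
    # E[exp(x1) * exp(x2)] = E[exp(x1 + x2)] = exp((var1 + var2) / 2 + cov12),
    # the log-normal mean of the sum x1 + x2.
    return np.exp((var1 + var2) / 2 + cov12)

# Monte Carlo sanity check of the identity above.
var1, var2, cov12 = 1.0, 1.0, 0.5
cov = np.array([[var1, cov12], [cov12, var2]])
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500_000)
mc_estimate = np.mean(np.exp(samples[:, 0]) * np.exp(samples[:, 1]))
closed_form = nngp_fn_exp(cov12, var1, var2)  # exp(1.5) ≈ 4.48
```

A scalar function of this shape is all `stax.Elementwise` asks for; when no such closed form is available, `stax.ElementwiseNumerical` evaluates the expectation numerically instead.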

Bugfixes:

* Compatibility with [JAX 0.2.21](https://github.com/google/jax/releases/tag/jax-v0.2.21).

**Full Changelog**: https://github.com/google/neural-tangents/compare/v0.3.7...v0.3.8

0.3.7

New Features:
* [`nt.stax.Cos`](https://github.com/google/neural-tangents/commit/68e8df0ca2c0007b535ef8cd85c3a0d5a5392b68)
* [`nt.stax.ImageResize`](https://github.com/google/neural-tangents/commit/f8a964feab46a96426440f8998c9039403f2a1d6)
* [New implementation `implementation="SPARSE"` in `nt.stax.Aggregate` for efficient handling of sparse graphs (see #86, #9)](https://github.com/google/neural-tangents/commit/b29337daf9a4e1f5b817f1689809021f52385f02)
* [Support `approximate=True` in `nt.stax.Gelu`](https://github.com/google/neural-tangents/commit/6ab76aa3ed072f6c34bb61784178eb2d8b85c2cb)
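
On `approximate=True` for GELU: the approximate variant conventionally refers to the tanh-based formula of Hendrycks & Gimpel, which tracks the exact `x * Φ(x)` very closely. A standalone check of that gap (assuming the usual convention; this is not the library's code):

```python
import math

def gelu_exact(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF written via erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # tanh approximation from Hendrycks & Gimpel (2016), the formula usually
    # selected by an approximate=True flag in GELU implementations.
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))

# Worst-case absolute gap on a grid over [-5, 5].
max_gap = max(abs(gelu_exact(x) - gelu_tanh(x))
              for x in (i / 100 for i in range(-500, 501)))
```

The approximation trades a tiny accuracy loss (well under 1e-2 everywhere) for avoiding `erf`.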

Bugfixes:
* [Fix a bug that might alter `Kernel` requirements](https://github.com/google/neural-tangents/commit/199b077fd0c6d4267db8e927a0c3c2a4e0a095cd)
* [Fix `nt.batch` handling of `diagonal_axes` (see #87)](https://github.com/google/neural-tangents/commit/fd1611660c87edcb0c2e50403f691b60d2cc252b)
* [Remove the frequent but redundant warning about type conversion in `kernel_fn`](https://github.com/google/neural-tangents/commit/b6eede90b7e89208e95d4154e0586c80a69d42a3)
* [Minor fixes to documentation and code clean-up](https://github.com/google/neural-tangents/commit/8ca8b985f13ad431eb818e7e5f8986f693651de7#diff-0c18e3e747635221997019d022bea51886f28fa1f153e93f50069b763ee83710)

Breaking changes:
* [Parameters initialized by `init_fn` now follow the setting of `JAX_ENABLE_X64` instead of always defaulting to 32-bit (see #112)](https://github.com/google/jax/commit/693d2e20cf40e17b567c4a252f37a4d6b9366e5d)
* [Drop python 3.6 support and add python 3.9 support](https://github.com/google/neural-tangents/commit/42cf4d55d4ab7525bd183b3f9e4d7dd889c3f810)

0.3.6

New Features:
* [`nt.stax.Sign`](https://github.com/google/neural-tangents/commit/dacd4f9c5531e93b4a0b70b9102414391c2f7b16)
* [Allow passing a `to_dense` function to `nt.stax.Aggregate`, enabling storage of the entire graph in a sparse format in GNNs.](https://github.com/google/neural-tangents/commit/9bb8816c7953aa97630a7984f9eae7a59c472d8f) See #86.
* [Support `get="ntkgp"` in `nt.predict.gp_inference`](https://github.com/google/neural-tangents/commit/ae3af8aa677f71416a56c45545c593ca18060ce7) (thanks bobby-he!). See #93 and https://arxiv.org/abs/2007.05864.
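
For context on `get="ntkgp"`: per the linked paper, it returns the posterior of a Gaussian process whose covariance is the NTK itself (rather than the NNGP). The underlying math is ordinary GP regression; a generic numpy sketch of that math, not `nt`'s internals:

```python
import numpy as np

def gp_posterior_mean(k_test_train, k_train_train, y_train, diag_reg=0.0):
    """Posterior mean of a GP: k_* (K + reg * I)^{-1} y. Generic kernel
    regression; in gp_inference the kernel would be the NTK or NNGP."""
    n = k_train_train.shape[0]
    solved = np.linalg.solve(k_train_train + diag_reg * np.eye(n), y_train)
    return k_test_train @ solved

# Toy RBF kernel on 1-D inputs, standing in for a kernel_fn output.
def rbf(x1, x2):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2)

x_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(x_train)
x_test = np.array([-1.0, 0.5, 1.0])
mean = gp_posterior_mean(rbf(x_test, x_train), rbf(x_train, x_train), y_train)
```

With `diag_reg=0` the noiseless posterior interpolates the training targets, so the predictions at the repeated training points reproduce `y_train` exactly.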

Bugfixes:
* [Improve numerical stability of differentiating nonlinearities, and avoid `NaN`s, notably in `nt.stax.Relu`](https://github.com/google/neural-tangents/commit/b1e750cb86f98e73cfe426494e8c647a271df928). See #88 and #73.
* [Allow passing different test/train `kwargs` in `nt.predict.gradient_descent_mse_ensemble`](https://github.com/google/neural-tangents/commit/2d199177911f6939ad850be3f745f28f5b48f612). See #79.

0.3.5

New features:
- [Major speedup of the empirical NTK via `vmap_axes`](https://github.com/google/neural-tangents/commit/f15b6528a47a73b1940f069309e69111b5235e13); see https://neural-tangents.readthedocs.io/en/latest/neural_tangents.empirical.html and the discussion in #30
- [Allow computing the maximal theoretical learning rate for a momentum optimizer](https://github.com/google/neural-tangents/commit/0916a4ff28be66c1664d331ca2aa2805340abbe3)
- [Add an IMDB sentiment analysis example](https://github.com/google/neural-tangents/commit/49ace9611e3f459fe3fe91ba32d8748642d19e57)
- [Allow pytrees as outputs of functions to linearize/taylorize](https://github.com/google/neural-tangents/commit/e639f6a857f2d588032b7f9e16144ee5e74846aa)
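
The maximal-learning-rate feature above rests on the linearized (NTK) picture: training dynamics are governed by the NTK's top eigenvalue λ_max, and for heavy-ball momentum the classical stability bound is η < 2(1 + β)/λ_max. A 1-D numpy demonstration of that bound (a generic sketch of the stability argument, not `nt`'s implementation, which may normalize differently):

```python
import numpy as np

def heavy_ball(lr, momentum, curvature, steps=200, x0=1.0):
    # Heavy-ball iterates on the quadratic loss L(x) = curvature * x^2 / 2,
    # whose gradient is curvature * x; the curvature plays the role of an
    # NTK eigenvalue in the linearized training dynamics.
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * curvature * x
        x = x + v
    return abs(x)

lam, beta = 4.0, 0.9
lr_max = 2.0 * (1.0 + beta) / lam      # classical heavy-ball stability bound
stable = heavy_ball(0.9 * lr_max, beta, lam)    # below the bound: converges
unstable = heavy_ball(1.1 * lr_max, beta, lam)  # above the bound: diverges
```

Running slightly below the bound drives the iterate toward zero, while running slightly above it blows up, which is exactly the threshold a maximal-learning-rate utility reports.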

Breaking changes:
- [Fuse `nt.empirical_direct_ntk_fn` and `nt.empirical_ntk_fn` into a single `nt.empirical_ntk_fn` accepting an `implementation=1/2` argument (`1` - direct, default; `2` - implicit)](https://github.com/google/neural-tangents/commit/f15b6528a47a73b1940f069309e69111b5235e13)
- [Rename `nt.stax.NumericalActivation` into `nt.stax.ElementwiseNumerical`](https://github.com/google/neural-tangents/commit/8ac41614c60824ffd654b518c4c21d18f82b2945)

[Minor bugfixes](https://github.com/google/neural-tangents/commit/06e8ffed34547f833d224899a0facb644a1e10be)
