NetKet

Latest version: v3.12.0

3.8

This is the last NetKet release to support Python 3.7 and Jax 0.3.
Starting with NetKet 3.9 we will require Jax 0.4, which in turn requires Python 3.8 (and soon 3.9).

New features
* {class}`netket.hilbert.TensorHilbert` has been generalised and now works with discrete, continuous, or a combination of discrete and continuous Hilbert spaces [1437](https://github.com/netket/netket/pull/1437).
* NetKet is now compatible with Numba 0.57 and therefore with Python 3.11 [1462](https://github.com/netket/netket/pull/1462).
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.MultipleRules` has been added; it can be used to pick from several transition proposals according to a given probability distribution.
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.TensorRule` has been added; it can be used to combine transition proposals acting on different subspaces of the Hilbert space.
* The new Metropolis sampling transition proposal rule {func}`netket.sampler.rules.FixedRule` has been added; it proposes the configuration unchanged.
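
The mechanism behind these rules can be sketched in plain NumPy (a toy illustration with hypothetical helper names, not NetKet's implementation): a mixture rule draws one of several proposal functions with fixed probabilities, while a fixed rule proposes the configuration unchanged.

```python
import numpy as np

def flip_rule(rng, x):
    """Toy proposal: flip one random site of a ±1 spin configuration."""
    y = x.copy()
    i = rng.integers(len(x))
    y[i] = -y[i]
    return y

def fixed_rule(rng, x):
    """Analogue of FixedRule: propose the configuration unchanged."""
    return x.copy()

def multiple_rules(rules, probabilities):
    """Analogue of MultipleRules: pick a sub-rule with the given weights."""
    probabilities = np.asarray(probabilities)

    def rule(rng, x):
        k = rng.choice(len(rules), p=probabilities)
        return rules[k](rng, x)

    return rule

rng = np.random.default_rng(0)
propose = multiple_rules([flip_rule, fixed_rule], [0.8, 0.2])
x = np.ones(4, dtype=int)
y = propose(rng, x)
```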

Deprecations
* The non-public API function used to select the default QGT mode for `QGTJacobian`, located at `nk.optimizer.qgt.qgt_jacobian_common.choose_jacobian_mode`, has been renamed and made part of the public API as `nk.jax.jacobian_default_mode`. If you were using this function, please update your code [1473](https://github.com/netket/netket/pull/1473).

Bug Fixes
* Fixed issue [1435](https://github.com/netket/netket/issues/1435), where a 0-tangent originating from integer samples was not correctly handled by {func}`nk.jax.vjp` [#1436](https://github.com/netket/netket/pull/1436).
* Fixed a bug in {class}`netket.sampler.rules.LangevinRule` when setting `chunk_size` [1465](https://github.com/netket/netket/pull/1465).

Improvements
* {class}`netket.operator.ContinuousOperator` has been improved: such operators now correctly test for equality and generate a consistent hash. Moreover, the internal logic of {class}`netket.operator.SumOperator` and {class}`netket.operator.Potential` has been improved, leading to fewer recompilations when an identical operator is constructed again. A few new attributes of those operators have also been exposed [1440](https://github.com/netket/netket/pull/1440).
* {func}`nk.nn.to_array` accepts an optional keyword argument `chunk_size`, and related methods on variational states now use the chunking specified in the variational state when generating the dense array [1470](https://github.com/netket/netket/pull/1470).
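
The effect of `chunk_size` can be illustrated with a toy dense-evaluation routine (a hypothetical sketch; `to_array_chunked` and the quadratic `log_psi` are invented for illustration): the wave-function is evaluated over the basis states in chunks, bounding peak memory while producing the same array.

```python
import numpy as np

def to_array_chunked(log_psi, states, chunk_size=None):
    """Evaluate exp(log_psi) over all basis states, optionally in chunks."""
    if chunk_size is None:
        return np.exp(log_psi(states))
    # Evaluate chunk_size states at a time and stitch the results together.
    parts = [np.exp(log_psi(states[i:i + chunk_size]))
             for i in range(0, len(states), chunk_size)]
    return np.concatenate(parts)

# Toy model: a quadratic log-amplitude over all 3-bit configurations.
states = np.array([[(n >> k) & 1 for k in range(3)] for n in range(8)], float)
log_psi = lambda s: -0.5 * np.sum(s**2, axis=-1)

full = to_array_chunked(log_psi, states)
chunked = to_array_chunked(log_psi, states, chunk_size=3)
```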

Breaking Changes
* Jax version `0.4` is now required, meaning that NetKet no longer works on Python 3.7.

3.7

New features
* Input and hidden layer masks can now be specified for {class}`netket.models.GCNN` [1387](https://github.com/netket/netket/pull/1387).
* Support for Jax 0.4 added [1416](https://github.com/netket/netket/pull/1416).
* Added a continuous-space Langevin-dynamics transition rule {class}`netket.sampler.rules.LangevinRule`, together with the corresponding shorthand for constructing the MCMC sampler, {func}`netket.sampler.MetropolisAdjustedLangevin` [1413](https://github.com/netket/netket/pull/1413).
* Added an experimental Quantum State Reconstruction driver at {class}`netket.experimental.QSR` to reconstruct states from data coming from quantum computers or simulators [1427](https://github.com/netket/netket/pull/1427).
* Added `netket.nn.blocks.SymmExpSum` flax module that symmetrizes a bare neural network module by summing the wave-function over all possible symmetry-permutations given by a certain symmetry group [1433](https://github.com/netket/netket/pull/1433).
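
The Langevin transition rule is based on the Metropolis-adjusted Langevin algorithm (MALA); the idea can be sketched for a toy continuous target with NumPy (illustrative only, not NetKet's implementation, which operates on variational states):

```python
import numpy as np

def mala_step(rng, x, logp, grad_logp, dt):
    """One Metropolis-adjusted Langevin step targeting exp(logp)."""
    # Langevin proposal: drift along the gradient plus Gaussian noise.
    y = x + dt * grad_logp(x) + np.sqrt(2 * dt) * rng.standard_normal(x.shape)

    def log_q(b, a):
        # Log density (up to a constant) of proposing b starting from a.
        return -np.sum((b - a - dt * grad_logp(a)) ** 2) / (4 * dt)

    # Metropolis correction for the asymmetric proposal.
    log_alpha = logp(y) - logp(x) + log_q(x, y) - log_q(y, x)
    return y if np.log(rng.uniform()) < log_alpha else x

# Toy target: a standard normal in two dimensions.
logp = lambda x: -0.5 * np.sum(x**2)
grad_logp = lambda x: -x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = np.empty((4000, 2))
for i in range(len(samples)):
    x = mala_step(rng, x, logp, grad_logp, dt=0.3)
    samples[i] = x
```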

Breaking Changes
* Parameters of the model {class}`netket.models.GCNN` and of the layers {class}`netket.nn.DenseSymm` and {class}`netket.nn.DenseEquivariant` are stored as an array of shape `[features, in_features, mask_size]`. Masked parameters are now excluded from the model instead of being multiplied by zero [1387](https://github.com/netket/netket/pull/1387).

Improvements
* The underlying extension API for Autoregressive models that can be used with Ancestral/Autoregressive samplers has been simplified and stabilized and will be documented as part of the public API. For most models, you should now inherit from {class}`netket.models.AbstractARNN` and define the method {meth}`~netket.models.AbstractARNN.conditionals_log_psi`. For additional performance, implementers can also redefine {meth}`~netket.models.AbstractARNN.__call__` and {meth}`~netket.models.AbstractARNN.conditional` but this should not be needed in general. This will cause some breaking changes if you were relying on the old undocumented interface [1361](https://github.com/netket/netket/pull/1361).
* {class}`netket.operator.PauliStrings` now works with non-homogeneous Hilbert spaces, such as those obtained by taking the tensor product of multiple Hilbert spaces [1411](https://github.com/netket/netket/pull/1411).
* {class}`netket.operator.LocalOperator` now keeps sparse matrices sparse, leading to faster algebraic manipulation of those objects. The overall computational and memory cost is, however, equivalent when running VMC calculations. All pre-constructed operators such as {func}`netket.operator.spin.sigmax` and {func}`netket.operator.boson.create` now build sparse operators [1422](https://github.com/netket/netket/pull/1422).
* When multiplying an operator by its conjugate transpose, NetKet no longer returns a lazy {class}`~netket.operator.Squared` object if the operator is Hermitian. This avoids checking whether the object is Hermitian, which greatly speeds up algebraic manipulation of operators, and returns unbiased expectation values [1423](https://github.com/netket/netket/pull/1423).
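
The `AbstractARNN` contract described above — implement the conditionals, obtain ancestral sampling in return — can be illustrated with a minimal toy model (a hypothetical NumPy sketch, not the actual NetKet base class):

```python
import numpy as np

class ToyARNN:
    """Toy autoregressive model over n binary sites (fixed logits)."""

    def __init__(self, logits):
        self.logits = np.asarray(logits, dtype=float)

    def conditionals_log_psi(self, x):
        """Log-amplitudes of the conditionals, shape (n, 2).

        For simplicity the conditionals here do not depend on the
        previous sites; a real ARNN would condition on x[:i].
        """
        log_p1 = -np.logaddexp(0.0, -self.logits)  # log sigmoid(logits)
        log_p0 = -np.logaddexp(0.0, self.logits)   # log sigmoid(-logits)
        # Amplitudes are square roots of probabilities, hence the 1/2.
        return 0.5 * np.stack([log_p0, log_p1], axis=-1)

    def sample(self, rng):
        """Ancestral sampling: draw each site from its conditional."""
        x = np.zeros(len(self.logits))
        for i in range(len(self.logits)):
            p = np.exp(2 * self.conditionals_log_psi(x)[i])  # |amplitude|^2
            x[i] = rng.choice([0, 1], p=p / p.sum())
        return x

model = ToyARNN([0.0, 2.0, -2.0])
s = model.sample(np.random.default_rng(0))
```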

Bug Fixes
* Fixed a bug where {meth}`nk.hilbert.Particle.random_state` could not be jit-compiled, and therefore could not be used in the sampling [1401](https://github.com/netket/netket/pull/1401).
* Fixed bug [1405](https://github.com/netket/netket/pull/1405), where {class}`nk.nn.DenseSymm` and {class}`nk.models.GCNN` did not work or did not correctly take masks into account [#1428](https://github.com/netket/netket/pull/1428).

Deprecations
* {meth}`netket.models.AbstractARNN._conditional` has been removed from the public API; using it will raise a deprecation warning. Update your ARNN models accordingly! [1361](https://github.com/netket/netket/pull/1361).
* Several undocumented internal methods from {class}`netket.models.AbstractARNN` have been removed [1361](https://github.com/netket/netket/pull/1361).

3.6

New features
* Added a new 'Full statevector' model {class}`netket.models.LogStateVector` that stores the exponentially large state and can be used as an exact ansatz [1324](https://github.com/netket/netket/pull/1324).
* Added a new experimental {class}`~netket.experimental.driver.TDVPSchmitt` driver, implementing the signal-to-noise ratio TDVP regularisation by Schmitt and Heyl [1306](https://github.com/netket/netket/pull/1306).
* QGT classes accept a `chunk_size` parameter that overrides the `chunk_size` set by the variational state object [1347](https://github.com/netket/netket/pull/1347).
* {func}`~netket.optimizer.qgt.QGTJacobianPyTree` and {func}`~netket.optimizer.qgt.QGTJacobianDense` support diagonal entry regularisation with constant and scale-invariant contributions. They accept a new `diag_scale` argument to pass the scale-invariant component [1352](https://github.com/netket/netket/pull/1352).
* {func}`~netket.optimizer.SR` preconditioner now supports scheduling of the diagonal shift and scale regularisations [1364](https://github.com/netket/netket/pull/1364).
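
The two diagonal regularisation terms can be written explicitly: with constant shift ε (`diag_shift`) and scale-invariant factor α (`diag_scale`), the regularised matrix is S → S + ε·I + α·diag(S). A NumPy sketch under that assumption (`regularize_qgt` is a hypothetical helper, not NetKet's implementation):

```python
import numpy as np

def regularize_qgt(S, diag_shift=0.01, diag_scale=0.0):
    """Constant plus scale-invariant diagonal regularisation of a QGT.

    Implements S -> S + diag_shift * I + diag_scale * diag(S).
    """
    S = np.asarray(S)
    return S + diag_shift * np.eye(len(S)) + diag_scale * np.diag(np.diag(S))

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
S_reg = regularize_qgt(S, diag_shift=0.01, diag_scale=0.1)
```

The scale-invariant term grows with the corresponding diagonal entry, so large and small curvature directions are regularised proportionally rather than by the same absolute amount.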

Improvements
* {meth}`~netket.vqs.ExactState.expect_and_grad` now returns a {class}`netket.stats.Stats` object that also contains the variance, as {class}`~netket.vqs.MCState` does [1325](https://github.com/netket/netket/pull/1325).
* Experimental RK solvers now store the error of the last timestep in the integrator state [1328](https://github.com/netket/netket/pull/1328).
* {class}`~netket.operator.PauliStrings` can now be constructed by passing a single string, instead of the previous requirement of a list of strings [1331](https://github.com/netket/netket/pull/1331).
* {class}`~flax.core.frozen_dict.FrozenDict` can now be logged to NetKet's loggers, meaning that one no longer needs to unfreeze the parameters before logging them [1338](https://github.com/netket/netket/pull/1338).
* Fermion operators are much more efficient and generate fewer connected elements [1279](https://github.com/netket/netket/pull/1279).
* NetKet is now fully PEP 621 compliant and no longer ships a `setup.py`, in favour of a `pyproject.toml` based on [hatchling](https://hatch.pypa.io/latest/). To install NetKet you should use a recent version of `pip` or a compatible tool such as poetry/hatch/flit [#1365](https://github.com/netket/netket/pull/1365).
* {func}`~netket.optimizer.qgt.QGTJacobianDense` can now be used with {class}`~netket.vqs.ExactState` [1358](https://github.com/netket/netket/pull/1358).


Bug Fixes
* {meth}`netket.vqs.ExactState.expect_and_grad` returned a scalar while {meth}`~netket.vqs.ExactState.expect` returned a {class}`netket.stats.Stats` object with 0 error. The inconsistency has been addressed and now they both return a `Stats` object. This changes the format of the files logged when running `VMC`, which will now store the average under `Mean` instead of `value` [1325](https://github.com/netket/netket/pull/1325).
* {func}`netket.optimizer.qgt.QGTJacobianDense` now returns the correct output for models with mixed real and complex parameters [1397](https://github.com/netket/netket/pull/1397).

Deprecations
* The `rescale_shift` argument of {func}`~netket.optimizer.qgt.QGTJacobianPyTree` and {func}`~netket.optimizer.qgt.QGTJacobianDense` is deprecated in favour of the more flexible syntax with `diag_scale`. `rescale_shift=False` should simply be removed; `rescale_shift=True` should be replaced with `diag_scale=old_diag_shift` [1352](https://github.com/netket/netket/pull/1352).
* The call signature of preconditioners passed to {class}`netket.driver.VMC` and other drivers has changed as a consequence of scheduling, and preconditioners should now accept an extra optional argument `step`. The old signature is still supported but is deprecated and will eventually be removed [1364](https://github.com/netket/netket/pull/1364).

3.5.2

Bug Fixes
* {class}`~netket.operator.PauliStrings` now supports the subtraction operator [1336](https://github.com/netket/netket/pull/1336).
* Autoregressive networks had a default activation function (`selu`) that did not act on the imaginary part of the inputs. The default is now `reim_selu`, which acts independently on the real and imaginary parts. This changes nothing for real parameters but improves the defaults for complex ones [1371](https://github.com/netket/netket/pull/1371).
* A **major performance degradation** that arose when using {class}`~netket.operator.LocalOperator` has been addressed. The bug caused our operators to be recompiled every time they were queried, imposing a large overhead [1377](https://github.com/netket/netket/pull/1377).
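
The difference between the two activations mentioned above can be sketched directly (illustrative NumPy definitions, not NetKet's own code): `reim_selu` applies SELU separately to the real and imaginary parts.

```python
import numpy as np

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    """Standard real-valued SELU activation."""
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1))

def reim_selu(z):
    """Apply SELU independently to the real and imaginary parts."""
    return selu(np.real(z)) + 1j * selu(np.imag(z))

z = np.array([1.0 + 2.0j, -1.0 - 0.5j])
out = reim_selu(z)
```

Applying `selu` directly to a complex array would act only on a single real representation of the input, which is why the split real/imaginary treatment gives better defaults for complex parameters.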

3.5.1

New features
* Added a new configuration option `netket.config.netket_experimental_disable_ode_jit` to disable jitting of the ODE solvers. This can be useful to avoid hangs that might happen when working on GPUs with some particular systems [1304](https://github.com/netket/netket/pull/1304).

Bug Fixes
* Continuous operators now work correctly when `chunk_size != None`. This was broken in v3.5 [1316](https://github.com/netket/netket/pull/1316).
* Fixed a bug ([1101](https://github.com/netket/netket/pull/1101)) that crashed NetKet when taking the product of two different Hilbert spaces: the logic building the `TensorHilbert` entered an endless loop [#1321](https://github.com/netket/netket/pull/1321).

3.5

[GitHub commits](https://github.com/netket/netket/compare/v3.4...master).

This release adds support, and the functions needed, to run TDVP for neural networks with real or non-holomorphic parameters, an experimental HDF5 logger, and an `MCState` method to compute the local estimators of an observable for a set of samples.

This release also drops support for older versions of Flax, adopting the new interface that fully supports complex-valued neural networks. Deprecation warnings might be raised if you were using layers from `netket.nn` that are now available in Flax.

A new, more accurate, estimation of the autocorrelation time has been introduced, but it is disabled by default. We welcome feedback.

New features

* The method {meth}`~netket.vqs.MCState.local_estimators` has been added, which returns the local estimators `O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉` (which are known as local energies if `O` is the Hamiltonian). [1179](https://github.com/netket/netket/pull/1179)
* The permutation equivariant {class}`nk.models.DeepSetRelDistance` for use with particles in periodic potentials has been added together with an example. [1199](https://github.com/netket/netket/pull/1199)
* The class {class}`HDF5Log` has been added to the experimental submodule. This logger writes log data and variational state variables into a single HDF5 file. [1200](https://github.com/netket/netket/issues/1200)
* Added a new method {meth}`~nk.logging.RuntimeLog.serialize` to store the content of the logger to disk [1255](https://github.com/netket/netket/issues/1255).
* New {class}`nk.callbacks.InvalidLossStopping` which stops optimisation if the loss function reaches a `NaN` value. An optional `patience` argument can be set. [1259](https://github.com/netket/netket/pull/1259)
* Added a new method {meth}`nk.graph.SpaceGroupBuilder.one_arm_irreps` to construct GCNN projection coefficients to project on single-wave-vector components of irreducible representations. [1260](https://github.com/netket/netket/issues/1260).
* New method {meth}`~nk.vqs.MCState.expect_and_forces` has been added, which can be used to compute the variational forces generated by an operator, instead of only the (real-valued) gradient of an expectation value. This in general is needed to write the TDVP equation or other similar equations. [1261](https://github.com/netket/netket/issues/1261)
* TDVP now works for real-parametrized wavefunctions as well as non-holomorphic ones because it makes use of {meth}`~nk.vqs.MCState.expect_and_forces`. [1261](https://github.com/netket/netket/issues/1261)
* New method {meth}`~nk.utils.group.Permutation.apply_to_id` can be used to apply a permutation (or a permutation group) to one or more lattice indices. [1293](https://github.com/netket/netket/issues/1293)
* It is now possible to disable MPI by setting the environment variable `NETKET_MPI`. This is useful in cases where mpi4py crashes upon load [1254](https://github.com/netket/netket/issues/1254).
* The new function {func}`nk.nn.binary_encoding` can be used to encode a set of samples according to the binary shape defined by a Hilbert space. It should be used similarly to {func}`flax.linen.one_hot` and works with non-homogeneous Hilbert spaces [1209](https://github.com/netket/netket/issues/1209).
* A new method to estimate the correlation time in Markov chain Monte Carlo (MCMC) sampling has been added to the {func}`nk.stats.statistics` function, which uses the full FFT transform of the input data. The new method is not enabled by default, but can be turned on by setting the `NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION` environment variable to `1`. In the future we might turn this on by default [1150](https://github.com/netket/netket/issues/1150).
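
The local estimator `O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉` can be checked on a tiny dense example (a NumPy sketch with a hypothetical 2×2 Hamiltonian): for an exact eigenstate, `O_loc` is constant and equal to the eigenvalue.

```python
import numpy as np

def local_estimators(O, psi, samples):
    """O_loc(s) = <s|O|psi> / <s|psi> for basis-state indices `samples`."""
    O_psi = O @ psi
    return O_psi[samples] / psi[samples]

# Toy 2x2 Hamiltonian and its exact ground state.
H = np.array([[1.0, -0.5],
              [-0.5, 1.0]])
E, V = np.linalg.eigh(H)
psi = V[:, 0]                      # exact ground state
samples = np.array([0, 1, 0, 1])   # "sampled" basis indices
e_loc = local_estimators(H, psi, samples)
```

For a non-eigenstate the `e_loc` values fluctuate, and their sample mean estimates the expectation value `<psi|O|psi>` when the samples are drawn from `|psi|^2`.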

Dependencies
* NetKet now requires at least Flax v0.5.

Deprecations

* `nk.nn.Module` and `nk.nn.compact` have been deprecated. Please use {class}`flax.linen.Module` and {func}`flax.linen.compact` instead.
* `nk.nn.Dense(dtype=mydtype)` and related modules (`Conv`, `DenseGeneral` and `ConvGeneral`) are deprecated. Please use `flax.linen.***(param_dtype=mydtype)` instead. Before Flax v0.5 those modules did not properly support complex numbers; starting with Flax 0.5 they do, so we have removed our linear-module wrappers and encourage you to use Flax's. Note that the `dtype` argument previously used by NetKet should be changed to `param_dtype` to maintain the same behaviour. [...](https://github.com/netket/netket/pull/...)

Bug Fixes
* Fixed bug where a `nk.operator.LocalOperator` representing the identity would lead to a crash. [1197](https://github.com/netket/netket/pull/1197)
* Fixed a bug where fermionic operators {class}`nkx.operator.FermionOperator2nd` would not be Hermitian even when they should have been. [1233](https://github.com/netket/netket/pull/1233)
* Fixed serialization of some arrays with complex dtype in `RuntimeLog` and `JsonLog`. [1258](https://github.com/netket/netket/pull/1258)
* Fixed bug where the {class}`nk.callbacks.EarlyStopping` callback would not work as intended when hitting a local minimum. [1238](https://github.com/netket/netket/pull/1238)
* `chunk_size` and the random seed of Monte Carlo variational states are now serialised. States serialised prior to this change can no longer be deserialised. [1247](https://github.com/netket/netket/pull/1247)
* Continuous-space hamiltonians now work correctly with neural networks with complex parameters [1273](https://github.com/netket/netket/pull/1273).
* NetKet now works under MPI with recent versions of jax (>=0.3.15) [1291](https://github.com/netket/netket/pull/1291).
