Nengo-dl

Latest version: v3.6.0

3.4.0
-----

*Compatible with Nengo 3.0.0 - 3.1.0*

*Compatible with TensorFlow 2.2.0 - 2.4.0*

**Added**

- Added support for `KerasSpiking <https://www.nengo.ai/keras-spiking/>`_ layers in
the Converter. (`182`_)
- Added support for ``tf.keras.layers.TimeDistributed`` in the Converter (see the
sketch after this list). (`182`_)
- Added support for TensorFlow 2.4. (`185`_)
- Added support for Nengo 3.1. (`187`_)
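
A rough sketch of these Converter additions; it assumes the ``keras-spiking``
package is installed, and the model here is only a placeholder.

.. code-block:: python

    import keras_spiking
    import tensorflow as tf

    import nengo_dl

    # a Keras model with an explicit time axis, using layers that the
    # Converter can now handle: TimeDistributed and a KerasSpiking layer
    inp = tf.keras.Input(shape=(None, 4))  # (batch, time, features)
    dense = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(10))(inp)
    spikes = keras_spiking.SpikingActivation("relu")(dense)
    model = tf.keras.Model(inputs=inp, outputs=spikes)

    # convert to a native Nengo network
    converter = nengo_dl.Converter(model)
    print(converter.net)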

**Changed**

- Minor improvements to build speed by building constants outside of TensorFlow.
(`173`_)
- Support for PES implementation changes in Nengo core (see
`1627 <https://github.com/nengo/nengo/pull/1627>`__ and
`1640 <https://github.com/nengo/nengo/pull/1640>`__). (`181`_)

**Fixed**

- Global default Keras dtype will now be reset correctly when an exception occurs
in a Simulator method outside the ``with Simulator`` context. (`173`_)
- Support the new LinearFilter step type introduced in Nengo core (see
`1629 <https://github.com/nengo/nengo/pull/1629>`__). (`173`_)
- Fixed a bug when slicing multi-dimensional Signals (e.g. Ensemble encoders).
(`181`_)
- Fixed a bug when loading weights saved in a different Python version. (`187`_)

.. _173: https://github.com/nengo/nengo-dl/pull/173
.. _181: https://github.com/nengo/nengo-dl/pull/181
.. _182: https://github.com/nengo/nengo-dl/pull/182
.. _185: https://github.com/nengo/nengo-dl/pull/185
.. _187: https://github.com/nengo/nengo-dl/pull/187

3.3.0
-----

*Compatible with Nengo 3.0.0*

*Compatible with TensorFlow 2.2.0 - 2.3.0*

**Added**

- Added support for new Nengo core ``NeuronType`` state implementation. (`159`_)
- Compatible with TensorFlow 2.3.0. (`159`_)
- Added support for ``nengo.Tanh``, ``nengo.RegularSpiking``,
``nengo.StochasticSpiking``, and ``nengo.PoissonSpiking`` neuron types. (`159`_)
- Added a ``nengo_dl.configure_settings(learning_phase=True/False)`` configuration
option. This mimics the previous behaviour of
``tf.keras.backend.learning_phase_scope`` (which was deprecated by TensorFlow). That
is, if you would like to override the default behaviour so that, e.g., ``sim.predict``
runs in training mode, set ``nengo_dl.configure_settings(learning_phase=True)``
(see the sketch after this list). (`163`_)
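
A minimal sketch of the ``learning_phase`` option; the network and data shapes
here are arbitrary placeholders.

.. code-block:: python

    import numpy as np
    import nengo

    import nengo_dl

    with nengo.Network() as net:
        # make all Simulator functions run in training mode (e.g. sim.predict
        # will use the rate approximations of spiking neurons)
        nengo_dl.configure_settings(learning_phase=True)

        inp = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    with nengo_dl.Simulator(net, minibatch_size=1) as sim:
        # runs with training=True rather than the default inference behaviour;
        # data shape is (batch_size, n_steps, dimensions)
        outputs = sim.predict(np.zeros((1, 10, 1)))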

**Changed**

- ``Simulator.evaluate`` no longer prints any information to stdout in TensorFlow
2.2 in graph mode (due to a TensorFlow issue, see
https://github.com/tensorflow/tensorflow/issues/39456). Loss/metric values
will still be returned from the function as normal. (`153`_)
- A warning will now be raised if activation types are passed to
``Converter.swap_activations`` that aren't actually in the model. (`168`_)
- Updated TensorFlow installation instruction in documentation. (`170`_)
- NengoDL will now use TensorFlow's eager mode by default. The previous graph-mode
behaviour can be restored by calling ``tf.compat.v1.disable_eager_execution()``, but
we cannot guarantee that that behaviour will be supported in the future. (`163`_)
- NengoDL will now use TensorFlow's "control flow v2" by default. The previous
behaviour can be restored by calling ``tf.compat.v1.disable_control_flow_v2()``, but
we cannot guarantee that that behaviour will be supported in the future. (`163`_)
- NengoDL will now default to allowing TensorFlow's "soft placement" logic, meaning
that even if you specify an explicit device like ``"/gpu:0"``, TensorFlow may not
allocate an op to that device if there isn't a compatible implementation available.
The previous behaviour can be restored by calling
``tf.config.set_soft_device_placement(False)``. (`163`_)
- Internal NengoDL ``OpBuilder`` classes now separate the "pre build" stage from
``OpBuilder.__init__`` (so that the same ``OpBuilder`` object can be re-used across
multiple calls, rather than instantiating a new ``OpBuilder`` each time). Note
that this has no impact on front-end users; it is
only relevant to anyone who has implemented a custom build class. The
logic that would previously have gone in ``OpBuilder.__init__`` should now go in
``OpBuilder.build_pre``. In addition, the ``ops`` argument has been removed
from ``OpBuilder.build_pre``; it is instead passed to ``OpBuilder.__init__``
(and is available in ``build_pre`` as ``self.ops``). Similarly, the ``ops`` and
``config`` arguments have been removed from ``build_post``, and can instead be
accessed through ``self.ops``/``self.config`` (see the sketch after this list).
(`163`_)
- Minimum TensorFlow version is now 2.2.0. (`163`_)
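
For those maintaining a custom build class, the sketch below shows roughly how
the new structure looks. ``MyOp`` is a hypothetical custom operator and the
attributes accessed from the grouped ops are only illustrative; consult the
``OpBuilder`` documentation for the exact interface.

.. code-block:: python

    from nengo.builder.operator import Operator

    from nengo_dl.builder import Builder, OpBuilder


    class MyOp(Operator):
        """A hypothetical custom Nengo operator."""

        def __init__(self, sig, tag=None):
            super().__init__(tag=tag)
            self.reads = [sig]
            self.sets = []
            self.incs = []
            self.updates = []

        def make_step(self, signals, dt, rng):
            def step():
                pass

            return step


    @Builder.register(MyOp)
    class MyOpBuilder(OpBuilder):
        def build_pre(self, signals, config):
            # setup that previously lived in __init__; the grouped operators
            # are now passed to __init__ and exposed as self.ops
            super().build_pre(signals, config)
            self.read_sigs = [op.reads[0] for op in self.ops]

        def build_step(self, signals):
            # per-timestep computation (a no-op for this placeholder)
            return None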

**Fixed**

- Support Sparse transforms in ``Simulator.get_nengo_params``. (`149`_)
- Fixed bug in TensorGraph log message when logging was enabled. (`151`_)
- Updated the KerasWrapper class in the ``tensorflow-models`` example to fix
a compatibility issue in TensorFlow 2.2. (`153`_)
- Handle Nodes that are not connected to anything else, but are probed (this only
occurs in Nengo>=3.1.0). (`159`_)
- More robust support for converting nested Keras models in TensorFlow 2.3. (`161`_)
- Fix bug when probing slices of certain probeable attributes (those that are
directly targeting a Signal in the model). (`164`_)

**Removed**

- Removed ``nengo_dl.utils.print_op`` (use ``tf.print`` instead). (`163`_)

.. _149: https://github.com/nengo/nengo-dl/pull/149
.. _151: https://github.com/nengo/nengo-dl/pull/151
.. _153: https://github.com/nengo/nengo-dl/pull/153
.. _159: https://github.com/nengo/nengo-dl/pull/159
.. _161: https://github.com/nengo/nengo-dl/pull/161
.. _163: https://github.com/nengo/nengo-dl/pull/163
.. _164: https://github.com/nengo/nengo-dl/pull/164
.. _168: https://github.com/nengo/nengo-dl/pull/168
.. _170: https://github.com/nengo/nengo-dl/pull/170

3.2.0
-----

*Compatible with Nengo 3.0.0*

*Compatible with TensorFlow 2.0.0 - 2.2.0*

**Added**

- Added ``nengo_dl.LeakyReLU`` and ``nengo_dl.SpikingLeakyReLU`` neuron models.
(`126`_)
- Added support for leaky ReLU Keras layers to ``nengo_dl.Converter``. (`126`_)
- Added a new ``remove_reset_incs`` graph simplification step. (`129`_)
- Added support for UpSampling layers to ``nengo_dl.Converter``. (`130`_)
- Added tolerance parameters to ``nengo_dl.Converter.verify``. (`130`_)
- Added ``scale_firing_rates`` option to ``nengo_dl.Converter``. (`134`_)
- Added a ``Converter.layers`` attribute, which maps Keras layers/tensors to
the converted Nengo objects, making it easier to access converted components.
(`134`_)
- Compatible with TensorFlow 2.2.0. (`140`_)
- Added a new ``synapse`` argument to the Converter, which can be used to automatically
add synaptic filters on the output of neural layers during the conversion process
(see the sketch after this list). (`141`_)
- Added a `new example <https://www.nengo.ai/nengo-dl/examples/keras-to-snn.html>`__
demonstrating how to use the NengoDL Converter to convert a Keras model to a spiking
Nengo network. (`141`_)
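
A rough sketch combining several of the additions above; the model and
parameter values are arbitrary placeholders.

.. code-block:: python

    import nengo
    import tensorflow as tf

    import nengo_dl

    # a small Keras model to convert
    inp = tf.keras.Input(shape=(4,))
    dense = tf.keras.layers.Dense(10, activation=tf.nn.relu)(inp)
    out = tf.keras.layers.Dense(2)(dense)
    model = tf.keras.Model(inputs=inp, outputs=out)

    converter = nengo_dl.Converter(
        model,
        # swap rate neurons for spiking ones
        swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
        # trade off firing rates versus precision
        scale_firing_rates=100,
        # add synaptic filters on the output of neural layers
        synapse=0.005,
    )

    # Converter.layers maps Keras layers/tensors to the converted Nengo
    # objects, e.g. so that the spiking layer can be probed
    with converter.net:
        spike_probe = nengo.Probe(converter.layers[dense])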

**Changed**

- Re-enabled the ``remove_constant_copies`` graph simplification by default. (`129`_)
- Reduced the amount of state that needs to be stored in the simulation. (`129`_)
- Added more information to the error message when loading saved parameters that
don't match the current model. (`129`_)
- More efficient implementation of convolutional biases in the Converter. (`130`_)
- Saved simulator state will no longer be included in ``Simulator.keras_model.weights``.
This means that ``Simulator.keras_model.save/load_weights`` will not include the
saved simulator state, making it easier to reuse weights between models (the models
only need to have the same weights, not the same state variables).
``Simulator.save/load_params(..., include_state=True)`` can be used to explicitly
save the simulator state, if desired (see the sketch after this list). (`140`_)
- Model parameters (e.g., connection weights) that are not trainable (because they've
been marked non-trainable by the user or are targeted by an online learning rule) will
now be treated separately from simulator state. For example,
``Simulator.save_params(..., include_state=False)`` will still include those
parameters, and the results of any online learning will persist between calls even
with ``stateful=False``. (`140`_)
- Added ``include_probes``, ``include_trainable``, and ``include_processes`` arguments
to ``Simulator.reset`` to provide more fine-grained control over Simulator
resetting. This replicates the previous functionality in ``Simulator.soft_reset``.
(`139`_)
- More informative error messages when accessing invalid Simulator functionality after
the Simulator has been closed. (`139`_)
- A warning is now raised when the number of input data items passed to the simulator
does not match the number of input nodes, to help avoid unintentionally passing
data to the wrong input node. This warning can be avoided by passing data for
all nodes, or using the dictionary input style if you want to only pass data for
a specific node. (`139`_)
- Dictionaries returned by ``sim.predict/evaluate`` will now be ordered. (`141`_)
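
A short sketch of the new state-saving and reset options, using a toy network.

.. code-block:: python

    import numpy as np
    import nengo

    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(inp, ens)
        probe = nengo.Probe(ens)

    with nengo_dl.Simulator(net) as sim:
        sim.run_steps(5)

        # by default only the model parameters are saved, making it easier
        # to reuse weights between models
        sim.save_params("./my_params")

        # explicitly include the simulator state as well
        sim.save_params("./my_params_with_state", include_state=True)

        # reset the simulation without clearing probe data, trainable
        # parameters, or process state (replaces Simulator.soft_reset)
        sim.reset(
            include_probes=False,
            include_trainable=False,
            include_processes=False,
        )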

**Fixed**

- Fixed bug in error message when passing data with batch size less than Simulator
minibatch size. (`139`_)
- More informative error message when ``validation_split`` does not result in batch
sizes evenly divisible by minibatch size. (`139`_)
- Added ``tensorflow-cpu`` distributions to installation checks (so NengoDL will
not attempt to reinstall TensorFlow if ``tensorflow-cpu`` is already installed).
(`142`_)
- Fixed bug when applying the Converter to Keras models that re-use intermediate
layers as output layers. (`137`_)
- Fixed bug in conversion of Keras Dense layers with non-native activation functions.
(`144`_)

**Deprecated**

- Renamed the ``include_non_trainable`` parameter of ``Simulator.save/load_params``
to ``include_state``. (`140`_)
- ``Simulator.soft_reset`` has been deprecated. Use
``Simulator.reset(include_probes=False, include_trainable=False,
include_processes=False)`` instead. (`139`_)

.. _126: https://github.com/nengo/nengo-dl/pull/126
.. _129: https://github.com/nengo/nengo-dl/pull/129
.. _130: https://github.com/nengo/nengo-dl/pull/130
.. _134: https://github.com/nengo/nengo-dl/pull/134
.. _137: https://github.com/nengo/nengo-dl/pull/137
.. _139: https://github.com/nengo/nengo-dl/pull/139
.. _140: https://github.com/nengo/nengo-dl/pull/140
.. _141: https://github.com/nengo/nengo-dl/pull/141
.. _142: https://github.com/nengo/nengo-dl/pull/142
.. _144: https://github.com/nengo/nengo-dl/pull/144

3.1.0
-----

*Compatible with Nengo 3.0.0*

*Compatible with TensorFlow 2.0.0 - 2.1.0*

**Added**

- Added an ``inference_only=True`` option to the Converter, which allows
Layers/parameters that cannot be fully converted to native Nengo objects to be
converted in a way that matches only the inference behaviour of the source Keras
model (not the training behaviour); see the sketch below. (`119`_)
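
A minimal sketch of the ``inference_only`` option; the model is just a
placeholder.

.. code-block:: python

    import tensorflow as tf

    import nengo_dl

    inp = tf.keras.Input(shape=(4,))
    x = tf.keras.layers.Dense(10, activation="relu")(inp)
    x = tf.keras.layers.BatchNormalization()(x)
    model = tf.keras.Model(inputs=inp, outputs=x)

    # BatchNormalization cannot be fully converted to native Nengo objects,
    # but with inference_only=True it is converted in a way that matches the
    # inference behaviour of the Keras model
    converter = nengo_dl.Converter(model, inference_only=True)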

**Changed**

- Improved build time of networks containing lots of ``TensorNodes``. (`119`_)
- Improved memory usage of build process. (`119`_)
- Saved simulation state may now be placed on GPU (this should improve the speed of
state updates, but may slightly increase GPU memory usage). (`119`_)
- Changed the Converter ``freeze_batchnorm=True`` option to ``inference_only=True``
(the effect on BatchNormalization layers is the same, but the new parameter also has
broader effects). (`119`_)
- The precision of the Nengo core build process will now be set based on the
``nengo_dl.configure_settings(dtype=...)`` config option. Note that this will
override the default precision set in ``nengo.rc``. (`119`_)
- Minimum Numpy version is now 1.16.0 (required by TensorFlow). (`119`_)
- Added support for the new ``transform=None`` default in Nengo connections
(see `Nengo1591`_). Note that this may change the number of trainable
parameters in a network as the scalar default ``transform=1`` weights on
non-Ensemble connections will no longer be present. (`128`_)

**Fixed**

- Provide a more informative error message if Layer ``shape_in``/``shape_out`` contains
undefined (``None``) elements. (`119`_)
- Fixed bug in ``Converter`` when source model contains duplicate nodes. (`119`_)
- Fixed bug in ``Converter`` for ``Concatenate`` layers with ``axis != 1``. (`119`_)
- Fixed bug in ``Converter`` for models containing passthrough ``Input`` layers inside
submodels. (`119`_)
- Keras Layers inside TensorNodes will be called with the ``training`` argument set
correctly (previously it was always set to the default value). (`119`_)
- Fixed compatibility with ``progressbar2`` version 3.50.0. (`136`_)

.. _119: https://github.com/nengo/nengo-dl/pull/119
.. _128: https://github.com/nengo/nengo-dl/pull/128
.. _136: https://github.com/nengo/nengo-dl/pull/136
.. _Nengo1591: https://github.com/nengo/nengo/pull/1591

3.0.0
-----

*Compatible with Nengo 3.0.0*

*Compatible with TensorFlow 2.0.0*

There are a lot of **breaking changes** in NengoDL 3.0. See the `migration guide
<https://www.nengo.ai/nengo-dl/migration-guide.html#nengodl-2-to-3>`_ for all the
details.

**Added**

- Keras ``Layer`` classes can now be used with ``nengo_dl.Layer/tensor_layer``.
- ``TensorGraph`` can now be used as a Keras ``Layer``.
- Added ``Simulator.predict/evaluate/fit`` functions, which implement the Keras
`Model API <https://www.tensorflow.org/api_docs/python/tf/keras/Model>`_ (see the
sketch after this list).
- Added a warning that changing the TensorFlow seed (e.g. on ``Simulator.reset``) will
not affect any existing TensorFlow operations (this was always true in TensorFlow;
the warning is just to help avoid confusion).
- Added ``TensorGraph.build_inputs``, which will return a set of Keras ``Input`` layers
that can be used as input to the TensorGraph layer itself.
- Added ``nengo_dl.callbacks.TensorBoard``. This is identical to
``tf.keras.callbacks.TensorBoard``, except it will also perform profiling during
inference (rather than only during training).
- Added ``stateful`` option to ``Simulator.run`` which can be set to False to avoid
updating the saved simulation state at the end of a run.
- Added ``nengo_dl.configure_settings(stateful=False)`` option to avoid building the
parts of the model responsible for preserving state between executions (this will
override any ``stateful=True`` arguments in individual functions).
- Added ``nengo_dl.configure_settings(use_loop=False)`` option to avoid building the
simulation inside a symbolic TensorFlow loop. This may improve simulation speed,
but the simulation can only run for exactly ``unroll_simulation`` timesteps.
- NengoDL now requires ``jinja2`` (used to template some of the docstrings).
- Added an ``inputs`` argument to ``Simulator.check_gradients``, which can be used to
control the initial value of input Nodes during the gradient calculations.
- Added ``nengo_dl.Converter`` for automatically converting Keras models to native
Nengo networks. See `the documentation
<https://www.nengo.ai/nengo-dl/converter.html>`__ for more details.
- Added `Legendre Memory Unit RNN example
<https://www.nengo.ai/nengo-dl/examples/lmu.html>`_.
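
A minimal sketch of the Keras ``Model`` API as exposed by the Simulator; the
network and training task here are just placeholders.

.. code-block:: python

    import numpy as np
    import nengo
    import tensorflow as tf

    import nengo_dl

    with nengo.Network() as net:
        inp = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
        nengo.Connection(inp, ens)
        out = nengo.Probe(ens)

    minibatch_size = 16

    with nengo_dl.Simulator(net, minibatch_size=minibatch_size) as sim:
        # inputs/targets are dicts mapping Nodes/Probes to arrays of shape
        # (batch_size, n_steps, dimensions)
        x = np.random.uniform(-1, 1, size=(320, 1, 1))
        y = x ** 2

        sim.compile(
            optimizer=tf.optimizers.Adam(0.01),
            loss=tf.losses.MeanSquaredError(),
        )
        sim.fit({inp: x}, {out: y}, epochs=5)

        predictions = sim.predict({inp: x[:minibatch_size]})
        print(predictions[out].shape)  # (16, 1, 1)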

**Changed**

- Minimum TensorFlow version is now 2.0.0.
- ``Simulator.save/load_params`` now uses a single
``include_non_trainable=True/False`` argument (equivalent to the previous
``include_local``). Trainable parameters will always be saved, so the
``include_global`` argument has been removed.
- Standardized all signals/operations in a simulation to be batch-first.
- The `dtype option <https://www.nengo.ai/nengo-dl/config.html#dtype>`_ is now specified
as a string (e.g. ``"float32"`` rather than ``tf.float32``).
- If the requested number of simulation steps is not evenly divisible by
``Simulator.unroll_simulation`` then probe values and ``sim.time/n_steps`` will be
updated based on the number of steps actually run (rather than the requested
number of steps). Note that these extra steps were also run previously, but their
results were hidden from the user.
- Renamed ``TensorGraph.input_ph`` to ``TensorGraph.node_inputs``.
- ``Simulator.time/n_steps`` are now read-only.
- ``Simulator.n_steps/time`` are now managed as part of the op graph, rather than
manually in the Simulator.
- Renamed ``nengo_dl.objectives`` to ``nengo_dl.losses`` (to align with ``tf.losses``).
- ``nengo_dl.objectives.Regularize`` now takes two arguments (``y_true`` and ``y_pred``)
in order to be compatible with the ``tf.losses.Loss`` API (``y_true`` is ignored).
- The `remove_constant_copies
<https://www.nengo.ai/nengo-dl/reference.html#nengo_dl.graph_optimizer.remove_constant_copies>`_
simplification step is now disabled by default.
In certain situations this could be an unsafe manipulation (specifically,
when using ``Simulator.save/load_params`` it could change which parameters are saved).
It can be manually re-enabled through the
`simplifications <https://www.nengo.ai/nengo-dl/config.html#simplifications>`_
configuration option.
- ``Simulator.check_gradients`` now only accepts an optional list of Probes (no longer
accepts arbitrary Tensors).
- Eager execution is no longer disabled on import (it is still disabled within the
Simulator context, for performance reasons; see
https://github.com/tensorflow/tensorflow/issues/33052).
- ``nengo_dl.tensor_layer(x, func, ...)`` now passes any extra kwargs to the
``nengo_dl.TensorNode`` constructor (rather than to ``func``). If you need to pass
information to ``func``, consider using a partial function (e.g.
``tensor_layer(x, functools.partial(func, arg=5), ...)``) or a callable class
(e.g. ``tensor_layer(x, MyFunc(arg=5), ...)``). When using Keras Layers with
``nengo_dl.tensor_layer``, a fully instantiated Layer
object should be passed rather than a Layer class (e.g., use
``tensor_layer(x, tf.keras.layers.Dense(units=10), ...)`` instead of
``tensor_layer(x, tf.keras.layers.Dense, units=10)``).
- ``benchmarks.run_profile`` now uses the TensorBoard format when profiling,
see `the documentation
<https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras>`_ for
instructions on how to view this information (the information is the same, it is
just accessed through TensorBoard rather than requiring that it be loaded directly
in a Chrome browser).
- ``nengo_dl.TensorNode`` now takes ``shape_in`` and ``shape_out`` arguments (which
specify a possibly multidimensional shape), rather
than the scalar ``size_in`` and ``size_out``.
- ``TensorNode`` functions no longer use the ``pre_build``/``post_build`` functionality.
If you need to implement more complex behaviour in a TensorNode, use a
custom Keras Layer subclass instead. For example, a Keras Layer used in a TensorNode
can create new parameter Variables inside the Layer's ``build`` method.
- ``TensorNode`` now has an optional ``pass_time`` parameter which can be set to
``False`` to disable passing the current simulation time to the TensorNode function.
- Added ``nengo_dl.Layer``. Similar to the old ``nengo_dl.tensor_layer``, this is a
wrapper for constructing TensorNodes, but it mimics the new ``tf.keras.layers.Layer``
API rather than the old ``tf.layers``.
- TensorFlow's "control flow v2" is disabled on import, for performance reasons; see
https://github.com/tensorflow/tensorflow/issues/33052.
- Renamed ``nengo_dl.objectives.mse`` to ``nengo_dl.losses.nan_mse`` (to emphasize
the special logic it provides for ``nan`` targets).
- Connections created by ``nengo_dl.Layer/tensor_layer`` will be marked as
non-trainable by default.
- Updated all documentation and examples for the new syntax (in particular, see the
updated `Coming from TensorFlow
<https://www.nengo.ai/nengo-dl/examples/from-tensorflow.html#>`_ tutorial and
`TensorFlow/Keras integration
<https://www.nengo.ai/nengo-dl/examples/tensorflow-models.html>`_ example, and the
new `Tips and tricks <https://www.nengo.ai/nengo-dl/tips.html>`_ page).
- The training/inference build logic (e.g., swapping spiking neurons with rate
implementations) can be overridden by setting the global Keras learning phase
(``tf.keras.backend.set_learning_phase``) before the Simulator is constructed.
- Increased minimum Nengo core version to 3.0.0.
- Reduced size of TensorFlow constants created by Reset ops.
- DotInc operators with different signal sizes will no longer be merged (these
merged operators had to use a less efficient sparse matrix multiplication, and in
general this cost outweighed the benefit of merging).
- Trainability can now be configured in the config of subnetworks (see the sketch
after this list). This replaces the ability to mark Networks as (non)trainable.
See the `updated documentation
<https://www.nengo.ai/nengo-dl/config.html#trainable>`__ for details.
- Training/evaluation target data can now have a different number of timesteps than
input data (as long as it aligns with the number of timesteps expected by the
loss function).
- Whether or not to display progress bars in ``Simulator.run`` and
``Simulator.run_steps`` now defaults to the value of
``Simulator(..., progress_bar=x)``.
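
A brief sketch of configuring trainability through the config system. The
class-level and object-level usage shown here follows the linked documentation;
per the entry above, the same pattern can also be applied within a subnetwork's
own config (instead of marking the whole subnetwork as (non)trainable).

.. code-block:: python

    import nengo

    import nengo_dl

    with nengo.Network() as net:
        # add the `trainable` option to this network's config
        nengo_dl.configure_settings(trainable=None)

        # class-level setting: no Ensemble parameters will be trained
        net.config[nengo.Ensemble].trainable = False

        a = nengo.Ensemble(10, 1)
        b = nengo.Ensemble(10, 1)
        conn = nengo.Connection(a, b)

        # object-level setting: this particular connection will not be
        # trained either
        net.config[conn].trainable = False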

**Fixed**

- Fixed bug due to non-determinism of Process state ordering in Python 3.5.
- Nested Keras layers passed to TensorNode will be rebuilt correctly if necessary.

**Deprecated**

- ``nengo_dl.tensor_layer`` has been deprecated. Use ``nengo_dl.Layer`` instead;
``tensor_layer(x, func, **kwargs)`` is equivalent to ``Layer(func)(x, **kwargs)``.
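
For example, a hypothetical pre-3.0 ``tensor_layer`` call and its
``nengo_dl.Layer`` equivalent (the layer shapes here are arbitrary):

.. code-block:: python

    import numpy as np
    import nengo
    import tensorflow as tf

    import nengo_dl

    with nengo.Network():
        inp = nengo.Node(np.zeros(28 * 28))

        # deprecated:
        # x = nengo_dl.tensor_layer(
        #     inp, tf.keras.layers.Conv2D(filters=32, kernel_size=3),
        #     shape_in=(28, 28, 1))

        # equivalent Layer syntax:
        x = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=32, kernel_size=3))(
            inp, shape_in=(28, 28, 1)
        )
        x = nengo_dl.Layer(tf.keras.layers.Dense(units=10))(x)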

**Removed**

- Removed the `session_config
<https://www.nengo.ai/nengo-dl/v2.2.1/config.html#session-config>`_ configuration
option. Use the `updated TensorFlow config system
<https://www.tensorflow.org/api_docs/python/tf/config>`_ instead.
- Removed the deprecated ``nengo_dl.Simulator(..., dtype=...)`` argument. Use
``nengo_dl.configure_settings(dtype=...)`` instead.
- Removed the deprecated ``Simulator.run(..., input_feeds=...)`` argument. Use
``Simulator.run(..., data=...)`` instead.
- Removed the ``Simulator.sess`` attribute (Sessions are no longer used in
TensorFlow 2.0). The underlying Keras model (``Simulator.keras_model``) should be
used as the entrypoint into the engine underlying a Simulator instead.
- Removed the ``Simulator.loss`` function (use ``Simulator.compile`` and
``Simulator.evaluate`` to compute loss values instead).
- Removed the ``Simulator.train`` function (use ``Simulator.compile`` and
``Simulator.fit`` to optimize a network instead).
- Removed the ``nengo_dl.objectives.Regularize(weight=x, ...)`` argument. Use the
``Simulator.compile(loss_weights=...)`` functionality instead.
- Removed the ``Simulator.run(..., extra_feeds=...)`` argument. TensorFlow 2.0 no longer
uses the Session/feed execution model.
- Removed ``Simulator.run_batch``. This functionality is now managed by the underlying
``Simulator.keras_model``.
- Removed ``TensorGraph.training_step``. The training step is now managed by Keras.
- Removed ``TensorGraph.build_outputs`` and ``TensorGraph.build_optimizer_func``.
Building loss functions/optimizers is now managed by Keras.
- Removed ``nengo_dl.utils.find_non_differentiable`` (this no longer works in TF2.0's
eager mode).
- Removed ``Simulator(..., tensorboard=...)`` argument. Use the Keras TensorBoard
callback approach for TensorBoard logging instead (see
``tf.keras.callbacks.TensorBoard`` or ``nengo_dl.callbacks.NengoSummaries``).
- NengoDL will no longer monkeypatch the ``tf.dynamic_stitch`` gradients on import.
The gradients are still incorrect (see
https://github.com/tensorflow/tensorflow/issues/7397), but we no longer use this
operation within NengoDL, so we leave it up to the user to fix it in their own code
if needed.
- Removed ``benchmarks.matmul_vs_reduce``. We use matmul for everything now, so this
comparison is no longer necessary.
- Removed ``utils.minibatch_generator`` (training/inference loops are now managed
by Keras).

2.2.2
-----

*Compatible with Nengo 2.8.0 - 3.0.0*

*Compatible with TensorFlow 1.4.0 - 2.0.0*

**Fixed**

- Compatibility with the Nengo 3.0 release.
