Norse

Latest version: v1.1.0


0.0.7rc1

This release candidate introduces draft code for sparse activations and adjoint-based optimizations, as described in https://arxiv.org/abs/2009.08378

0.0.6

This release features our shiny new module API: it unifies all spiking neuron modules
under one common base class, thereby eliminating redundant code.

From a user perspective, this also means that the API is now consistent across all neuron
types.
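
To illustrate the idea of a common base class for neuron modules, here is a minimal pure-Python sketch. The names (`SNNCellBase`, `LIFCell`, `initial_state`, `step`) are hypothetical and do not correspond to Norse's actual classes; the point is only the shared `(input, state) -> (output, state)` convention with inferred initial state.

```python
# Hypothetical sketch of a unified neuron-module API (not Norse's code):
# every neuron type shares one base class that handles state
# initialization and the (input, state) -> (output, state) convention.

class SNNCellBase:
    """Shared base: subclasses implement only `initial_state` and `step`."""

    def __call__(self, x, state=None):
        if state is None:            # infer the initial state on first call
            state = self.initial_state(x)
        return self.step(x, state)


class LIFCell(SNNCellBase):
    """Toy leaky integrate-and-fire neuron built on the shared base."""

    def __init__(self, tau=0.9, threshold=1.0):
        self.tau = tau
        self.threshold = threshold

    def initial_state(self, x):
        return 0.0                   # membrane potential starts at rest

    def step(self, x, v):
        v = self.tau * v + x         # leaky integration
        spike = 1.0 if v >= self.threshold else 0.0
        if spike:
            v = 0.0                  # reset membrane after a spike
        return spike, v
```

Because state handling lives in the base class, adding a new neuron type only requires defining its dynamics, which is the redundancy the release notes describe eliminating.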

0.0.5

This release brings numerous improvements in terms of speed, usability, specializations, documentation and more. In general, we tried to make Norse more user-friendly and applicable both for die-hard deep-learning experts and for neuroscience enthusiasts new to Python. Specifically, this release includes:
* Compatibility with the [PyTorch Lightning](https://pytorchlightning.ai/) library, which means that Norse now scales to multiple GPUs and even supercomputing clusters with [SLURM](https://en.wikipedia.org/wiki/Slurm_Workload_Manager). As an example, see our [`MNIST` task](https://norse.github.io/norse/tasks.html#mnist-in-pytorch-lightning).
* The [`SequentialState`](https://norse.github.io/norse/started.html#using-norse-neurons-as-pytorch-layers) module, which works similar to PyTorch's `Sequential` layers in that it allows for seamless composition of PyTorch *and* Norse modules. Together with the [`Lift`](https://norse.github.io/norse/started.html#using-norse-in-time) module, this is an important step towards powerful and simple tools for developing spiking neural networks.
* As Norse becomes faster to work with, it is also easier to implement more complex models. Norse now features spiking convolutions, [MobileNet](https://arxiv.org/abs/1704.04861) and [VGG](https://arxiv.org/abs/1409.1556) networks which can be used out-of-the box. See the [`norse.torch.models` package](https://norse.github.io/norse/auto_api/norse.torch.models.html) for more information.
* Improved performance. We implemented the LIF neuron equations and the SuperSpike synthetic gradient in C++. All in all, **Norse is roughly twice as fast** as it was before.
* Improved documentation. The main pages and the introductory pages were edited and cleaned up. This is an area we will be improving much more in the future.
* Various bugfixes. Norse is now more stable and usable than before.
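
The state-threading idea behind `SequentialState` can be sketched in a few lines of pure Python. This is an illustration of the concept, not Norse's implementation (the real module is `norse.torch.SequentialState`); the class and the `stateful` flag below are invented for the example.

```python
# Conceptual sketch (not Norse's code) of SequentialState-style
# composition: stateless layers are plain callables, stateful layers
# return (output, state), and the container threads each layer's state
# through the chain.

class SequentialStateSketch:
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x, states=None):
        if states is None:
            states = [None] * len(self.layers)    # infer initial states
        new_states = []
        for layer, state in zip(self.layers, states):
            if getattr(layer, "stateful", False):
                x, state = layer(x, state)        # stateful: thread state
            else:
                x = layer(x)                      # stateless, e.g. a Linear
            new_states.append(state)
        return x, new_states


class Accumulator:
    """Toy stateful layer: its state is the running sum of its inputs."""
    stateful = True

    def __call__(self, x, state):
        state = (state or 0) + x
        return state, state
```

Mixing a stateless `lambda x: 2 * x` with an `Accumulator` in one chain, and feeding the returned states back in on the next call, mirrors how Norse modules can be interleaved with ordinary PyTorch layers.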

As always, we welcome feedback and are looking forward to hearing how you are using Norse! Happy hacking :partying_face:

0.0.4

This release contains a number of functionality and model additions, as well as improved PyTorch compatibility through the [`Lift`](https://github.com/norse/norse/blob/master/norse/torch/module/lift.py#L7) module. Most notably, we

* Added spike-time plasticity
* Added regularization for spiking cells/layers
* Added a layer for Lifting regular PyTorch layers to work with temporal data
* Improved usability by
  * cleaning up neuron model parameters,
  * inferring the initial neuron state, and
  * inferring the device parameter
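
The "lifting" idea can be shown with a tiny pure-Python sketch: take a function defined on a single sample and map it over a leading time dimension, so a stateless layer can consume temporal data. This is only an illustration of the concept; Norse's actual module is the linked `Lift`, which wraps PyTorch modules and tensors.

```python
# Illustrative sketch (not Norse's implementation) of "lifting": apply a
# per-sample function independently to every time step of a sequence.

def lift(fn):
    def lifted(sequence):
        return [fn(step) for step in sequence]   # one call per time step
    return lifted
```

For example, lifting a ReLU-like function turns a per-sample operation into one that maps over a whole spike train: `lift(lambda x: max(0, x))([-1, 2, -3])` yields `[0, 2, 0]`.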

0.0.3

This release includes biologically plausible neuron parameters, performance testing (including comparison to like-minded frameworks), and a new shiny logo!

0.0.2

This release includes a number of stability changes and documentation additions. Most significantly, some inconsistencies in neuron model flags have been fixed, and the MNIST, CIFAR, and cartpole tasks have been tested on both CPU and GPU backends.
