deel-lip

Latest version: v1.5.0


1.5.0

New features and improvements

- Two new losses based on the standard Keras cross-entropy losses, with a settable softmax temperature (usage sketch after this list):
  - `TauSparseCategoricalCrossentropy`, equivalent to Keras `SparseCategoricalCrossentropy`
  - `TauBinaryCrossentropy`, equivalent to Keras `BinaryCrossentropy`
- New module `deel.lip.compute_layer_sv` to compute the largest and smallest singular values of a single layer (`compute_layer_sv()`) or of a whole model (`compute_model_sv()`); also shown in the sketch after this list.
- Power iteration algorithm for convolution.
- New "Getting Started" tutorial to introduce 1-Lipschitz neural networks.
- Documentation migration from Sphinx to MkDocs.
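
A minimal sketch combining the new losses and the singular-value module. The model architecture is arbitrary, and the `tau` keyword and the exact return value of `compute_model_sv()` are assumptions based on this changelog, so check the API reference for the precise signatures:

```python
import numpy as np
import tensorflow as tf
from deel.lip.model import Sequential
from deel.lip.layers import SpectralDense, GroupSort2
from deel.lip.losses import TauSparseCategoricalCrossentropy
from deel.lip.compute_layer_sv import compute_model_sv

# A small 1-Lipschitz classifier.
model = Sequential([
    tf.keras.layers.Input(shape=(16,)),
    SpectralDense(32),
    GroupSort2(),
    SpectralDense(10),
])

# Temperature-scaled cross-entropy; `tau` is assumed to be the
# constructor argument for the softmax temperature.
model.compile(
    optimizer="adam",
    loss=TauSparseCategoricalCrossentropy(tau=10.0),
)
model.fit(np.random.randn(64, 16), np.random.randint(0, 10, size=64), epochs=1)

# Largest and smallest singular values of each layer of the model.
print(compute_model_sv(model))
```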

API changes

- Activations are now imported via the `deel.lip.layers` submodule, e.g. `deel.lip.layers.GroupSort` instead of `deel.lip.activations.GroupSort`, following the same convention as Keras. The legacy submodule is still available for backward compatibility but will be removed in a future release.
- Unconstrained layers must now be imported via the `deel.lip.layers.unconstrained` submodule, e.g. `deel.lip.layers.unconstrained.PadConv2D`. The new paths are shown below.
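
Before/after of the new import paths (all names are taken directly from the items above):

```python
# New (v1.5.0) import paths:
from deel.lip.layers import GroupSort
from deel.lip.layers.unconstrained import PadConv2D

# Legacy path, still available for backward compatibility but deprecated:
from deel.lip.activations import GroupSort as LegacyGroupSort
```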

Fixes

- Fix `InvertibleUpSampling.__call__()` returning `None`.

**Full Changelog**: https://github.com/deel-ai/deel-lip/compare/v1.4.0...v1.5.0

1.4.0

New features and improvements

- Two new layers:
  - `SpectralConv2DTranspose`, a Lipschitz version of the Keras `Conv2DTranspose` layer
  - `Householder`, an activation layer that is a parametrized generalization of `GroupSort2`
- Two new regularizers to foster orthogonality:
  - `LorthRegularizer` for orthogonal convolutions
  - `OrthDenseRegularizer` for an orthogonal `Dense` kernel matrix
- Two new losses for Lipschitz networks (usage sketch after this list):
  - `TauCategoricalCrossentropy`, a categorical cross-entropy loss with temperature scaling `tau`
  - `CategoricalHinge`, a hinge loss for multi-class problems based on the Keras `CategoricalHinge` implementation
- Two new custom callbacks:
  - `LossParamScheduler`, to change loss hyper-parameters during training, e.g. `min_margin`, `alpha` and `tau`
  - `LossParamLog`, to log the values of loss parameters
- The Björck orthogonalization algorithm was accelerated.
- Normalizers (power iteration and Björck) use `tf.while_loop`; its `swap_memory` argument can be set globally with `set_swap_memory(bool)`. The default value is `True`, which reduces GPU memory usage.
- The new function `set_stop_grad_spectral(bool)` allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default value is `True`; stopping gradient propagation reduces runtime. Both switches are shown in the sketch below.
- Due to bugs in the TensorFlow serialization of custom losses and metrics (versions 2.0 and 2.1), deel-lip now only supports TensorFlow >= 2.2.
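
A short sketch of the new loss and the global normalizer switches. The changelog names only the functions, not their module, so the import path below is an assumption (shown where the iterative normalizers live); adjust it to your deel-lip version:

```python
from deel.lip.losses import TauCategoricalCrossentropy

# Assumed import location for the global switches named above.
from deel.lip.normalizers import set_swap_memory, set_stop_grad_spectral

set_swap_memory(True)         # let tf.while_loop swap memory to save GPU RAM
set_stop_grad_spectral(True)  # skip back-prop through power iteration (faster)

# Temperature-scaled categorical cross-entropy; `tau` is the temperature
# hyper-parameter named in this changelog.
loss = TauCategoricalCrossentropy(tau=5.0)
```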

Fixes

- `SpectralInitializer` no longer reuses the same base initializer across multiple instances.

**Full Changelog**: https://github.com/deel-ai/deel-lip/compare/v1.3.0...v1.4.0

1.3.0

New features and improvements

- New layer `PadConv2D` supporting additional padding modes in convolutional layers, in particular circular padding (sketch after this list)
- Losses now handle multi-label classification
- Losses can be computed element-wise: the `reduction` parameter of the custom losses can be set to `None`
- New metrics: `ProvableAvgRobustness` and `ProvableRobustAccuracy`
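
A minimal sketch of circular padding with the new layer. That `PadConv2D` mirrors the Keras `Conv2D` constructor and accepts `padding="circular"` are assumptions based on the item above:

```python
import numpy as np
import tensorflow as tf
from deel.lip.layers import PadConv2D  # moved to deel.lip.layers.unconstrained in v1.5.0

# `padding="circular"` is assumed to be the keyword for the circular mode.
layer = PadConv2D(filters=8, kernel_size=3, padding="circular")
x = tf.constant(np.random.randn(1, 32, 32, 3), dtype=tf.float32)
print(layer(x).shape)
```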

API changes

- `KR` is no longer a function but a class derived from `tf.keras.losses.Loss`.
- The `negative_KR` function was removed; use the loss `HKR(alpha=0)` instead.
- The stopping criterion of the iterative methods (spectral normalization and Björck orthogonalization) is no longer a fixed number of iterations, `niter_spectral` and `niter_bjorck`. The methods now stop when the difference between two successive iterations falls below the tolerances `eps_spectral` and `eps_bjorck` (example after this list). This API change occurs in:
  - Lipschitz layers, such as `SpectralDense` and `SpectralConv2D`
  - the normalizer `reshaped_kernel_orthogonalization`
  - the constraint `SpectralConstraint`
  - the initializer `SpectralInitializer`
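
For instance, a layer is now configured with tolerances instead of iteration counts; the numeric values here are illustrative, not recommended defaults:

```python
from deel.lip.layers import SpectralDense

# Old API (removed): fixed iteration counts niter_spectral / niter_bjorck.
# New API: iterate until two successive estimates differ by less than eps.
dense = SpectralDense(64, eps_spectral=1e-3, eps_bjorck=1e-3)
```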


**Full Changelog**: https://github.com/deel-ai/deel-lip/compare/v1.2.0...v1.3.0

1.2.0

This revision contains:
- code refactoring: `wbar` is now stored in a `tf.Variable`
- updated documentation notebooks
- updates to the callbacks, initializers, constraints, etc.
- updated losses and their tests
- improved loss stability for small batches
- new layer `ScaledGlobalL2NormPooling2D` (sketch below)
- a new way to export Keras-serializable objects

This release ends support for TensorFlow 2.0; only versions >= 2.1 are supported.
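
A minimal sketch of the new pooling layer; that it takes no required arguments (like Keras' global pooling layers) is an assumption:

```python
import numpy as np
import tensorflow as tf
from deel.lip.layers import ScaledGlobalL2NormPooling2D

# Reduces each feature map to a (scaled) L2 norm, one value per channel.
pool = ScaledGlobalL2NormPooling2D()
x = tf.constant(np.random.randn(2, 8, 8, 4), dtype=tf.float32)
print(pool(x).shape)  # expected: (2, 4)
```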

1.1.1

This revision contains:
- bug fixes in `losses.py`: fixed a data-type problem in `HKR_loss` and a weighting problem in `KR_multiclass_loss`
- changed behavior of `FrobeniusDense` in the multi-class setup: using `FrobeniusDense` with 10 output neurons is now equivalent to stacking 10 `FrobeniusDense` layers with 1 output neuron each. The L2 normalization is performed on each neuron instead of on the full weight matrix (illustration below).
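
A plain-TensorFlow illustration of the normalization change (this is not deel-lip code, only the operation it describes):

```python
import numpy as np
import tensorflow as tf

kernel = tf.constant(np.random.randn(16, 10), dtype=tf.float32)  # 16 inputs, 10 neurons

# Old behavior: a single L2 normalization over the full weight matrix.
w_old = kernel / tf.norm(kernel)

# New behavior: each neuron (column) is normalized independently,
# as if 10 single-output FrobeniusDense layers were stacked.
w_new = kernel / tf.norm(kernel, axis=0, keepdims=True)
```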

1.1.0

This version adds new features:

- `InvertibleDownSampling` and `InvertibleUpSampling` (sketch below)
- a multiclass extension of the HKR loss
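
A minimal sketch of the two invertible resampling layers; `pool_size` is assumed to be the constructor argument, following the Keras pooling convention:

```python
import numpy as np
import tensorflow as tf
from deel.lip.layers import InvertibleDownSampling, InvertibleUpSampling

down = InvertibleDownSampling(pool_size=2)
up = InvertibleUpSampling(pool_size=2)

x = tf.constant(np.random.randn(1, 8, 8, 4), dtype=tf.float32)
y = down(x)      # spatial pixels rearranged into channels: (1, 4, 4, 16)
x_back = up(y)   # inverse operation restores the original shape (1, 8, 8, 4)
print(y.shape, x_back.shape)
```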

It also contains multiple fixes:

- a bug with `L2NormPooling`
- a bug with `vanilla_export`
- a bug with the `tf.function` annotation causing an incorrect Lipschitz constant in `Sequential` (for constants other than 1)

Breaking changes:

- The `true_values` parameter has been removed from the binary HKR loss, as both the (1, -1) and (1, 0) label encodings are now handled automatically (example below).
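
A sketch of the simplified call. At this version the binary loss is the `HKR_loss` function mentioned in 1.1.1 (later versions expose a `HKR` class); `alpha` and `min_margin` are the hyper-parameters named elsewhere in this changelog:

```python
from deel.lip.losses import HKR_loss

# No `true_values` argument anymore: labels encoded as {1, -1} or
# {1, 0} are both handled automatically.
loss = HKR_loss(alpha=10.0, min_margin=1.0)
```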
