Adversarial-robustness-toolbox

Latest version: v1.17.1

0.8.0

Not secure
This release includes **new evasion attacks**, such as ZOO, the boundary attack and the adversarial patch, as well as the ability to break non-differentiable defences.

Added
* ZOO black-box attack (class `ZooAttack`)
* Decision boundary black-box attack (class `BoundaryAttack`)
* Adversarial patch (class `AdversarialPatch`)
* Function to estimate gradients in the `Preprocessor` API, along with its implementation for all concrete instances. This makes it possible to break non-differentiable defences.
* Attributes `apply_fit` and `apply_predict` in the `Preprocessor` API that indicate whether a defence should be applied at training and/or test time
* Classifiers are now capable of running a full backward pass through defences
* `save` function for TensorFlow models
* New notebook with usage example for the adversarial patch
* New notebook showing how to synthesize an adversarially robust architecture (see ICLR SafeML Workshop 2019: **Evolutionary Search for Adversarially Robust Neural Network** by M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, M.N. Tran)
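
The new black-box attacks listed above can be used roughly as follows. This is a minimal sketch, assuming the 0.x package layout (`art.attacks`, `art.classifiers`), standalone Keras, and a toy untrained model; the `KerasClassifier` constructor arguments and attack defaults shown here vary between ART versions.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Flatten

from art.attacks import BoundaryAttack, ZooAttack
from art.classifiers import KerasClassifier

# Toy Keras model standing in for a real, trained classifier.
model = Sequential([Flatten(input_shape=(28, 28, 1)),
                    Dense(64, activation="relu"),
                    Dense(10, activation="softmax")])
model.compile(loss="categorical_crossentropy", optimizer="adam")

# Wrap the model for ART; argument names are version-dependent (assumption).
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

x_test = np.random.rand(4, 28, 28, 1).astype(np.float32)

# ZOO estimates gradients via zeroth-order optimization, so it only needs
# the model's output scores (black-box).
x_adv_zoo = ZooAttack(classifier=classifier).generate(x_test)

# The decision boundary attack only needs the model's predicted labels.
x_adv_boundary = BoundaryAttack(classifier=classifier, targeted=False).generate(x_test)
```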

Changed
* [Breaking change] Defences in classifiers are now to be specified as `Preprocessor` instances instead of strings
* [Breaking change] Parameter `random_init` in `FastGradientMethod`, `ProjectedGradientDescent` and `BasicIterativeMethod` has been renamed to `num_random_init` and now allows specifying the number of random initializations to run before choosing the best attack
* Possibility to specify batch size when calling `get_activations` from `Classifier` API
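
A sketch of the renamed parameter and of batching in `get_activations`, reusing `classifier` and `x_test` from the sketch above; with the breaking change, any defences would likewise be passed to the classifier as `Preprocessor` instances rather than as strings.

```python
from art.attacks import FastGradientMethod

# `num_random_init` replaces the old `random_init` flag: run several random
# restarts and keep the most successful perturbation per sample.
attack = FastGradientMethod(classifier=classifier, eps=0.1, num_random_init=5)
x_adv = attack.generate(x_test)

# Activations can now be computed batch-wise to limit memory usage.
activations = classifier.get_activations(x_test, layer=1, batch_size=32)
```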

0.7.0

Not secure
This release contains a **new poison removal method**, as well as some restructuring of features recently added to the library.

Added
- Poison removal method that performs retraining, as part of the `ActivationDefence` class
- Example script of how to use the poison removal method
- New module `wrappers` containing features that alter the behaviour of a `Classifier`. These are to be used as wrappers for classifiers and to be passed directly to evasion attack instances.

Changed
- `ExpectationOverTransformations` has been moved to the `wrappers` module
- `QueryEfficientBBGradientEstimation` has been moved to the `wrappers` module

Removed
- Attacks no longer take an `expectation` parameter (breaking). Instead, an `ExpectationOverTransformations` instance is now passed directly to the attack.
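
A sketch of the new pattern, assuming `classifier` and `x_test` as in the 0.8.0 examples above. The `ExpectationOverTransformations` constructor arguments (`sample_size`, `transformation`) and the generator-style transformation are assumptions about the wrapper's interface, not confirmed signatures.

```python
import numpy as np

from art.attacks import FastGradientMethod
from art.wrappers import ExpectationOverTransformations

def random_shifts(x):
    """Hypothetical transformation sampler: yields randomly shifted copies of x."""
    while True:
        yield np.roll(x, shift=np.random.randint(-2, 3), axis=2)

# The wrapper is handed to the attack in place of the bare classifier,
# replacing the removed `expectation` parameter.
eot = ExpectationOverTransformations(classifier, sample_size=10,
                                     transformation=random_shifts)
x_adv = FastGradientMethod(eot, eps=0.1).generate(x_test)
```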

Fixed
- Bug in the spatial transformations attack: when the attack does not succeed, the original samples are now returned (issue 40, fixed in 42, 43)
- Bug in Keras with loss functions that do not take labels in one-hot encoding (issue 41)
- Bug fix in activation defence against poisoning: incorrect test condition
- Bug fix in DeepFool: inverted stop condition when working with batches
- Import problem in `utils.py`: top-level imports were forcing users to install all supported ML frameworks

0.6.0

Not secure
Added
- PixelDefend defense
- Query-efficient black-box gradient estimates (NES)
- A general wrapper for classifiers that allows changing their behaviour (see `art/classifiers/wrapper.py`)
- 3D plot in visualization
- Saver for `PyTorchClassifier`
- Pickling for `KerasClassifier`
- Representation for all classifiers
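
A short sketch of the new persistence features, assuming `pytorch_classifier` and `keras_classifier` are existing `PyTorchClassifier` and `KerasClassifier` instances; the `save(filename, path)` signature is an assumption based on the classifier API of this era.

```python
import pickle

# Persist the underlying PyTorch model to disk.
pytorch_classifier.save(filename="model.pt", path="./saved_models")

# KerasClassifier instances can now be pickled like ordinary Python objects.
with open("keras_classifier.pkl", "wb") as f:
    pickle.dump(keras_classifier, f)
```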

Changed
- We now use pretrained models for unit tests (see `art/utils.py`, functions `get_classifier_pt`, `get_classifier_kr`, `get_classifier_tf`)
- Keras models now accept any loss function

Removed
- `Detector` abstract class. Detectors now directly extend `Classifier`

Thanks also to our external contributors!
AkashGanesan

0.5.0

Not secure
This release of ART adds two new evasion attacks and provides bug fixes, as well as new features such as access to the learning phase (training/test) through the `Classifier` API, batching in evasion attacks, and expectation over transformations.

Added
- Spatial transformations evasion attack (class `art.attacks.SpatialTransformations`)
- Elastic net (EAD) evasion attack (class `art.attacks.ElasticNet`)
- Data generator support for multiple types of TensorFlow iterators
- New function and property in the `Classifier` API that allow explicit control of the learning phase (train/test)
- Reports for poisoning module
- Most evasion attacks now support batching; the batch size is specified by the new parameter `batch_size`
- `ExpectationOverTransformations` class, to be used with evasion attacks
- Parameter `expectation` of evasion attacks allows specifying the use of expectation over transformations
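
A sketch of the new batching and learning-phase controls, assuming `classifier` and `x_test` as in the examples further up; `set_learning_phase` is an assumed name for the new learning-phase function.

```python
from art.attacks import ElasticNet

# Put the underlying model into test mode before crafting adversarial samples.
classifier.set_learning_phase(False)

# The new `batch_size` parameter controls how many samples are attacked at once.
attack = ElasticNet(classifier=classifier, batch_size=16)
x_adv = attack.generate(x_test)
```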

Changed
- Updated the list of attacks supported by universal perturbation
- PyLint and Travis configs

Fixed
- Indexing error in C&W L_2 attack (issue 29)
- Universal perturbation stop condition: attack was always stopping after one iteration
- Error with data subsampling in `AdversarialTrainer` when the ratio of adversarial samples is 1

0.4.0

Not secure
Added
- Class `art.classifiers.EnsembleClassifier`: support for ensembles under `Classifier` interface
- Module `art.data_generators`: data feeders for dynamic loading and augmentation for all frameworks
- New function `fit_generator` for classifiers and the adversarial trainer
- C&W L_inf attack
- Class `art.defences.JpegCompression`: JPEG compression as preprocessing defence
- Class `art.defences.ThermometerEncoding`: thermometer encoding as preprocessing defence
- Class `art.defences.TotalVarMin`: total variance minimization as preprocessing defence
- Function `art.utils.master_seed`: setting master seed for random number generators
- `pylint` for Travis
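
A sketch combining `master_seed` with generator-based training, assuming `classifier`, `x_train` and `y_train` already exist; the `KerasDataGenerator` and `fit_generator` argument names are assumptions about the 0.x API.

```python
from keras.preprocessing.image import ImageDataGenerator

from art.data_generators import KerasDataGenerator
from art.utils import master_seed

# Seed all supported random number generators for reproducible experiments.
master_seed(42)

# Wrap a Keras iterator so ART can train without loading all data at once.
datagen = ImageDataGenerator(rotation_range=10, horizontal_flip=True)
art_generator = KerasDataGenerator(datagen.flow(x_train, y_train, batch_size=32),
                                   size=x_train.shape[0], batch_size=32)

classifier.fit_generator(art_generator, nb_epochs=5)
```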

Changed
- Restructure analyzers from poisoning module

Fixed
- PyTorch classifier support on GPU

0.3.0

Not secure
This release brings many new features to ART, including a poisoning module, an adversarial sample detection module and support for MXNet models.

Added
- Access to layers and model activations through the `Classifier` API
- MXNet support
- Poison detection module, containing the poisoning detection method based on clustering activations
- Jupyter notebook with poisoning attack and detection example on MNIST
- Adversarial sample detection module, containing two detectors: one based on inputs and one based on activations
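
A sketch of the new introspection and poison detection features, assuming `classifier`, `x_test`, `x_train` and `y_train` as in the examples further up; the `art.poison_detection` module path and the `detect_poison` keyword arguments are assumptions about the 0.x layout.

```python
# Layer access through the Classifier API.
print(classifier.layer_names)                     # queryable layer names
activations = classifier.get_activations(x_test, layer=1)

# Clustering-based detection of poisoned training samples.
from art.poison_detection import ActivationDefence

defence = ActivationDefence(classifier, x_train, y_train)
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")
```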

Changed
- Optimized JSMA attack (`art.attacks.SaliencyMapMethod`) - can now run on ImageNet data
- Optimized C&W attack (`art.attacks.CarliniL2Method`)
- Improved adversarial trainer, now covering a wide range of setups

Removed
- Hard-coded `config` folder. The config is now created on the fly when ART is run for the first time and stored in the home folder under `~/.art`
