Adversarial Robustness Toolbox (ART)

0.2.0

This release makes ART framework-independent. The following backends are now supported: TensorFlow, Keras and PyTorch.
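
As an illustration of the design, the sketch below shows what such a framework-independent classifier interface might look like, together with a Keras-backed implementation. All names here (`Classifier`, `KerasBackend`, `predict`, `loss_gradient`) are assumptions made for this sketch, not ART's actual API in this release:

```python
import abc

import numpy as np


class Classifier(abc.ABC):
    """Illustrative framework-independent classifier interface.

    Attacks are written against these methods only, so they never touch
    TensorFlow, Keras, or PyTorch directly.
    """

    @abc.abstractmethod
    def predict(self, x: np.ndarray) -> np.ndarray:
        """Return class probabilities for a batch of inputs."""

    @abc.abstractmethod
    def loss_gradient(self, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Return the gradient of the loss w.r.t. the inputs."""


class KerasBackend(Classifier):
    """Hypothetical Keras-backed implementation of the interface."""

    def __init__(self, model, loss=None):
        import tensorflow as tf

        self._model = model
        self._loss = loss or tf.keras.losses.CategoricalCrossentropy()

    def predict(self, x):
        return self._model.predict(x)

    def loss_gradient(self, x, y):
        import tensorflow as tf

        x_t = tf.convert_to_tensor(x, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x_t)
            loss = self._loss(y, self._model(x_t))
        # Gradient of the classification loss with respect to the inputs.
        return tape.gradient(loss, x_t).numpy()
```

Attacks written against `predict` and `loss_gradient` never need to know which framework sits underneath.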

Added
- New framework-independent `Classifier` interface
- Backend support for TensorFlow, Keras and PyTorch
- Basic interface for detecting adversarial samples (no concrete method implemented yet)
- Gaussian augmentation
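
Gaussian augmentation extends a training set with noise-perturbed copies of its inputs. A minimal numpy sketch of the idea (the function name, `ratio` parameter and defaults are illustrative assumptions, not necessarily this release's signature):

```python
import numpy as np


def gaussian_augmentation(x, y, sigma=0.1, ratio=1.0, seed=None):
    """Append Gaussian-noise copies of the data; labels are copied unchanged.

    ratio controls how many noisy samples are added relative to the size
    of the original set.
    """
    rng = np.random.default_rng(seed)
    n_new = int(ratio * len(x))
    idx = rng.choice(len(x), size=n_new, replace=True)
    noisy = x[idx] + rng.normal(scale=sigma, size=x[idx].shape)
    return np.concatenate([x, noisy]), np.concatenate([y, y[idx]])
```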

Changed
- All attacks now fit the new `Classifier` interface
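
Because every attack now targets the shared interface, an attack is written once and runs against any backend. A hedged sketch of the pattern, reusing the hypothetical interface above (`FastGradientAttack` is illustrative, not this release's module layout):

```python
import numpy as np


class FastGradientAttack:
    """Illustrative attack coded purely against the classifier interface.

    It only calls loss_gradient(), so any backend wrapper (TensorFlow,
    Keras or PyTorch) that implements the interface works unchanged.
    """

    def __init__(self, classifier, eps=0.1):
        self.classifier = classifier
        self.eps = eps

    def generate(self, x, y):
        grad = self.classifier.loss_gradient(x, y)
        # One step that increases the loss, clipped to the valid data range.
        return np.clip(x + self.eps * np.sign(grad), 0.0, 1.0)
```

Swapping `KerasBackend(model)` for a TensorFlow- or PyTorch-backed wrapper would leave the attack code untouched, which is the point of this change.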

Fixed
- `to_categorical` utility function for unsqueezed labels (see the sketch after this list)
- Norms in the CLEVER score
- Source code folder name, fixing the PyPI install
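
An "unsqueezed" label array carries a trailing singleton axis, i.e. shape (n, 1) instead of (n,). A minimal illustrative version of a `to_categorical` helper that accepts both shapes (an assumption for this sketch, not necessarily ART's exact implementation):

```python
import numpy as np


def to_categorical(labels, nb_classes):
    """One-hot encode labels, accepting shape (n,) or 'unsqueezed' (n, 1)."""
    labels = np.asarray(labels, dtype=int).ravel()  # squeeze (n, 1) -> (n,)
    one_hot = np.zeros((labels.size, nb_classes))
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot
```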

Removed
- Hard-coded architectures for specific datasets and model types (CNN, ResNet, MLP)

0.1

This is the initial release of ART. The following features are currently supported:
- `Classifier` interface, supporting a few predefined architectures (CNN, ResNet, MLP) for standard datasets (MNIST, CIFAR10), as well as custom models from users
- `Attack` interface, supporting a few evasion attacks:
  - FGM & FGSM (see the first sketch after this list)
  - Jacobian saliency map attack
  - Carlini & Wagner L_2 attack
  - DeepFool
  - NewtonFool
  - Virtual adversarial method (to be used for virtual adversarial training)
  - Universal perturbation
- Defences:
  - Preprocessing interface, currently implemented by feature squeezing, label smoothing and spatial smoothing (see the second sketch after this list)
  - Adversarial training
- Metrics for measuring robustness: empirical robustness (minimal perturbation), loss sensitivity and the CLEVER score
- Utilities for loading datasets, preprocessing data and common maths manipulations
- Scripts for launching some basic training, testing and attack pipelines
- Unit tests
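
FGM and FGSM are closely related: FGM steps along the loss gradient normalized under a chosen norm, while FGSM is the L-infinity variant that keeps only the gradient's sign. A minimal numpy sketch of both updates, assuming the loss gradient with respect to the inputs is already available (function names are illustrative):

```python
import numpy as np


def fgm(x, grad, eps, norm=2):
    """Fast Gradient Method: step of size eps along the normalized gradient."""
    g = grad.reshape(len(x), -1)
    g = g / (np.linalg.norm(g, ord=norm, axis=1, keepdims=True) + 1e-12)
    return x + eps * g.reshape(x.shape)


def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: the L-infinity special case of FGM."""
    return x + eps * np.sign(grad)
```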
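
Of the preprocessing defences, feature squeezing is the simplest to illustrate: it reduces the colour depth of the inputs so that small adversarial perturbations are rounded away; spatial smoothing plays a similar role with a local median filter. A hedged sketch assuming inputs scaled to [0, 1] (the function name and default bit depth are assumptions for this sketch):

```python
import numpy as np


def feature_squeezing(x, bit_depth=4):
    """Quantize inputs in [0, 1] down to 2**bit_depth distinct levels."""
    levels = 2 ** bit_depth - 1
    return np.round(x * levels) / levels
```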
