Adversarial-robustness-toolbox

Latest version: v1.17.1


1.1.1

This release of ART v1.1.1 fixes two bugs in `TensorFlowV2Classifier` and `KerasClassifier`.

Added

[None]

Changed

[None]

Removed

[None]

Fixed

- Fixed a bug in `TensorFlowV2Classifier` that resulted in incorrect loss gradient calculations for all loss functions except `tensorflow.keras.losses.SparseCategoricalCrossentropy`; see the construction sketch after this list. (279)
- Fixed a bug in `KerasClassifier` that allowed predictions on input data with the wrong shape without raising any exception. Checks for the input data shape have been added, or the model's own predict method is used where possible. This bug did not affect any classifier evaluated with the correct input data shape expected by the model. (283)
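For context, a minimal construction sketch with the loss function named above; the import path and parameter names follow the ART v1.x API, and the model architecture, shapes, and clip values are illustrative only:

```python
# Sketch: TensorFlowV2Classifier with tensorflow.keras.losses.SparseCategoricalCrossentropy,
# the loss object whose gradients were unaffected by the bug fixed in (279).
import tensorflow as tf

from art.classifiers import TensorFlowV2Classifier

# Illustrative model with softmax outputs, matching the (non-from_logits) loss object below.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.SparseCategoricalCrossentropy(),
    clip_values=(0.0, 1.0),
)
```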

1.1.0

This release of ART v1.1.0 introduces a new class of attacks and defences for model extraction threats, in addition to the existing attacks and defences for evasion and poisoning. It also enables top-level package import of ART and includes a Kubeflow component demonstrating an example application of ART for evaluating the robustness of machine learning models.

Added

- Added separate base classes for evasion, extraction, and poisoning attacks (250)
- Added the Functionally Equivalent Extraction attack for neural networks with two dense layers and ReLU activation (231)
- Added the Copycat CNN extraction attack (232)
- Added defences against model extraction attacks including output modification with reverse sigmoid, random noise, class labels, and high confidence (234)
- Added support for top-level package import to enable `import art` (240); see the sketch after this list
- Added references to current limitations of defences (228)
- Added version to the ART package (239)
- Added a Kubeflow component using ART to run a robustness evaluation of PyTorch models with FGSM. This is a simple example and is not intended to represent a comprehensive robustness evaluation. (206)
- Added class gradients to `art.classifiers.ScikitlearnSVC` to enable targeted white-box attacks on SVM (215)
- Added checks to all classifiers raising an exception if the input data is of format `np.uint8`, `np.uint16`, `np.uint32`, or `np.uint64` to avoid unexpected outcomes during input preprocessing (226)
- Added support for Keras 2.3 and later with TensorFlow v2 as backend (200)
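A short sketch tying together the top-level import (240), the package version attribute (239), and the new integer-dtype checks (226); the data array below is illustrative:

```python
# Sketch: top-level package import, version attribute, and casting unsigned-integer
# data to float before handing it to an ART classifier (np.uint* inputs now raise).
import numpy as np

import art

print(art.__version__)  # package version attribute added in v1.1.0

x = np.random.randint(0, 256, size=(16, 28, 28, 1), dtype=np.uint8)  # e.g. raw image data
x = x.astype(np.float32) / 255.0  # cast (and scale) before calling classifier methods
```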

Changed

- Changed the Fast Gradient Sign Method attack minimal perturbation implementation to prevent it from modifying the original input data (213)
- Changed the reporting of attack success rates to always report percentages across all attacks (202)
- Changed and improved the detection of the loss function in `KerasClassifier` (212)

Removed

[None]

Fixed

- Fixed a bug in the logging configuration (190)
- Fixed a bug in the HCLU attack by replacing the hard-coded confidence parameter (228)
- Fixed a bug in `TensorFlowV2Classifier` by adding the missing attribute `_input_shape` (249)

1.0.1

This release of ART v1.0.1 accounts for initial user feedback on v1.0.0.

Added

- Added support for binary logistic regression with `sklearn.linear_model.LogisticRegression` in addition to the existing support for multi-class logistic regression (171)

Changed

- Extended the exception messages inside attacks that check for valid combinations of attacks and classifiers, to better explain the reason for the raised exception (174)

- Updated Travis unit testing to use TensorFlow 2.0.0 (183)

Removed

[None]

Fixed

- Fixed an issue in `art.attacks.PoisoningAttackSVM` where sometimes a certain class label would not create unique poison points (168)

- Fixed typos in README (170, 184)

1.0.0

This is the first major release of the Adversarial Robustness 360 Toolbox (ART v1.0)!

This release generalises ART to support all possible classifier models, in addition to its existing support for neural networks. Furthermore, it generalises the label format, to accept index labels as well as one-hot encoded labels, and the input shape, to accept, for example, tabular data as input features. This release also adds new model-specific white-box and poisoning attacks and provides new methods to certify and verify the adversarial robustness of neural networks and decision tree ensembles.

Added

- Add support for all classifiers and pipelines of scikit-learn including but not limited to `LogisticRegression`, `SVC`, `LinearSVC`, `DecisionTreeClassifier`, `AdaBoostClassifier`, `BaggingClassifier`, `ExtraTreesClassifier`, `GradientBoostingClassifier`, `RandomForestClassifier`, and `Pipeline`. (47)

- Add support for gradient boosted tree classifier models of `XGBoost`, `LightGBM` and `CatBoost`.

- Add support for TensorFlow v2 (rc0) by introducing a new classifier `TensorFlowV2Classifier` providing support for eager execution and accepting callable models. `KerasClassifier` has been extended to provide support for TensorFlow v2 `tensorflow.keras` Models without eager execution. (66)

- Add support for models of the Gaussian Process framework GPy. (116)

- Add the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation as an attack on Gaussian Processes. (116)

- Add the Decision Tree attack as a white-box attack for decision tree classifiers (115)

- Add support for white-box attacks on scikit-learn’s `LogisticRegression`, `SVC`, `LinearSVC`, and `DecisionTreeClassifier` as well as on `GPy` models, and for black-box attacks on all scikit-learn classifiers and on XGBoost, LightGBM, and CatBoost models; see the sketch after this list.

- Add Randomized Smoothing as a wrapper class for neural network classifiers to provide certified adversarial robustness under the L2 norm. (114)

- Add the Clique Method Robustness Verification method for decision-tree-ensemble classifiers and extend it for models of XGBoost, LightGBM, and scikit-learn's `ExtraTreesClassifier`, `GradientBoostingClassifier`, `RandomForestClassifier`. (124)

- Add `BlackBoxClassifier` expecting only a single Python function as interface to the classifier predictions. This is the most general and versatile classifier of ART. New tutorial notebooks demonstrate `BlackBoxClassifier` testing the adversarial robustness of remote, deployed classifier models and of the Optical Character Recognition (OCR) engine Tesseract. (123, 152)

- Add the Poisoning Attack for Support Vector Machines with linear, polynomial or radial basis function kernels. (155)
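To illustrate the new scikit-learn support, feature-vector inputs, and white-box attacks on `LogisticRegression`, here is a minimal sketch; the import paths follow the v1.0 module layout (`art.classifiers`, `art.attacks`), and the dataset and attack parameters are illustrative only:

```python
# Sketch: wrap a scikit-learn LogisticRegression and run a white-box FGSM attack
# on tabular (feature-vector) inputs.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod
from art.classifiers import SklearnClassifier

x, y = load_iris(return_X_y=True)
model = LogisticRegression().fit(x, y)

classifier = SklearnClassifier(model=model, clip_values=(float(x.min()), float(x.max())))
attack = FastGradientMethod(classifier, eps=0.2)
x_adv = attack.generate(x=x.astype(np.float32))

# Fraction of predictions flipped by the perturbation.
print((model.predict(x) != model.predict(x_adv)).mean())
```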

Changed

- Introduce a new flexible API for all classifiers with an abstract base class for basic classifiers (minimal functionality to support black-box attacks), and mixins for neural networks, gradient-providing classifiers (to support white-box attacks), and decision-tree-based classifiers.

- Update, extend and introduce new get started examples and notebook tutorials for all supported frameworks. (47, 140)

- Extend label format to accept index labels in addition to the already supported one-hot-encoded labels; see the sketch after this list. Internally ART continues to treat labels as one-hot-encoded. This feature allows users of ART to use the label format preferred by their machine learning framework and datasets. (126)

- Change the order of the preprocessing steps of applying defences and standardisation/normalisation in classifiers. Previously, classifiers applied standardisation first, followed by defences. With this release, defences are applied first, followed by standardisation, to enable comparable defence parameters across classifiers with different standardisation/normalisation parameters. (84)

- Use the `batch_size` of an attack as argument to the method `predict` of its classifier to reduce out-of-memory errors for large models. (105)

- Generalize the classifiers of TensorFlow, Keras, PyTorch, and MXNet by removing assumptions on their output (logits or probabilities). The Boolean parameter `logits` has been removed from Classifier API in methods `predict` and `class_gradient`. The predictions and gradients are now computed at the output of the model without any modifications. (50, 75, 106, 150)

- Rename `TFClassifier` to `TensorFlowClassifier` and keep `TFClassifier` for backward compatibility.
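A small sketch of the extended label format (126); the arrays are illustrative, and the `batch_size` comment refers to the change described above (105):

```python
# Sketch: index labels are now accepted wherever one-hot labels were required;
# internally ART still converts labels to one-hot encoding.
import numpy as np

y_index = np.array([2, 0, 1])       # index labels (newly accepted)
y_onehot = np.eye(3)[y_index]       # one-hot labels (already supported)

# The batch_size of an attack, e.g. FastGradientMethod(classifier, eps=0.1, batch_size=32),
# is now passed on to classifier.predict() inside generate().
```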

Removed

- Sunset support for Python 2 in preparation for its retirement on Jan 1, 2020. We have stopped running unittests with Python 2 and do not require new contributions to run with Python 2. We keep existing compatibility code for Python 2 and 3 where possible. (83)

Fixed

- Improve `VirtualAdversarialMethod` by making the computation of the L2 data normalisation more reliable and raising an exception if it is used with a model providing logits as output; `VirtualAdversarialMethod` currently expects probabilities as output. (120, 157)

0.10.0

This release contains new black-box attacks, detectors, updated attacks, and several bug fixes.

Added
* Added HopSkipJump attack, a powerful new black-box attack (80); see the sketch after this list
* Added new example script demonstrating the perturbation of a neural network layer between input and output (92)
* Added a notebook demonstrating `BoundaryAttack`
* Added a detector based on Fast Generalized Subset Scanning (100)
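A minimal sketch of HopSkipJump (80): it only requires classifier predictions, so any ART classifier wrapper works. The scikit-learn wrapper below is from the later v1.0 API and is used here purely to keep the sketch self-contained; the parameters are illustrative:

```python
# Sketch: HopSkipJump, a decision-based black-box attack that needs only predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks import HopSkipJump
from art.classifiers import SklearnClassifier

x, y = load_iris(return_X_y=True)
classifier = SklearnClassifier(
    model=LogisticRegression().fit(x, y),
    clip_values=(float(x.min()), float(x.max())),  # used to sample initial adversarial points
)

attack = HopSkipJump(classifier, targeted=False, max_iter=10)
x_adv = attack.generate(x=x[:2].astype(np.float32))
```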

Changed
* Changed Basic Iterative Method (BIM) attack to be a special case of Projected Gradient Descent attack with `norm=np.inf` and without random initialisation (90)
* Reduced calls to method predict in attacks `FastGradientMethod` and `BasicIterativeMethod` to improve performance (70)
* Updated pretrained models in notebooks with on-demand downloads of the pretrained models (63, 88)
* Added batch processing to `AdversarialPatch` attack (96)
* Increased TensorFlow versions in unit testing on Travis CI to 1.12.3, 1.13.1, and 1.14.0 (94)
* Attacks now accept the argument `batch_size`, which is used in calls to `classifier.predict` within the attack, replacing the default `batch_size=128` of `classifier.predict` (105)
* Changed the order of preprocessing defences and standardisation in classifiers: defences are now applied to the provided input data, and standardisation (the `preprocessing` argument of the classifier) is applied after the defences (84)
* Updated all defences to account for `clip_values` (84)

Removed
* Removed pretrained models from the directory `models` used in notebooks and replaced them with on-demand downloads (63, 88)
* Removed argument `patch_shape` from attack `AdversarialPatch` (77)
* Stopped unit testing for Python 2 on Travis CI (83)

Fixed
* Fixed all Pylint and LGTM alerts and warnings (110)
* Fixed broken links in notebooks (63, 88)
* Fixed broken links to ImageNet data in notebook `attack_defense_imagenet` (109)
* Fixed calculation of attack budget `eps` by accounting for initial benign sample in projection to eps-ball for random initialisation in `FastGradientMethod` and `BasicIterativeMethod` (85)

0.9.0

This release contains breaking changes to attacks and defences with regard to setting attributes, removes restrictions on input shapes to enable the use of feature vectors, and includes several bug fixes.


Added

- implement pickle for classifiers `tensorflow` and `pytorch` (39)
- added example `data_augmentation.py` demonstrating the use of data generators

Changed

- renamed and moved tests (58)
- changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks requiring spatial inputs raise exceptions (49)
- clipping of data ranges is now optional in classifiers, which allows attacks to accept unbounded data ranges (49)
- [Breaking changes] class attributes in attacks can no longer be changed with method `generate`; changing attributes is only possible with methods `__init__` and `set_params` (see the sketch after this list)
- [Breaking changes] class attributes in defences can no longer be changed with method `__call__`; changing attributes is only possible with methods `__init__` and `set_params`
- resolved an inconsistency in the `random_init` behaviour of PGD compared to Madry's version
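A sketch of the breaking change for attacks; the scikit-learn wrapper used to keep it runnable was only added later (in v1.0.0), and the parameter values are illustrative:

```python
# Sketch: attack attributes are set in __init__ or via set_params();
# attributes can no longer be changed via generate().
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod
from art.classifiers import SklearnClassifier

x, y = load_iris(return_X_y=True)
classifier = SklearnClassifier(model=LogisticRegression().fit(x, y))

attack = FastGradientMethod(classifier, eps=0.3)   # set attributes at construction...
attack.set_params(eps=0.1)                         # ...or via set_params()
x_adv = attack.generate(x=x.astype(np.float32))    # generate() no longer changes attributes
```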

Removed

- deprecated static adversarial trainer `StaticAdversarialTrainer`


Fixed

- Fixed a bug in the ZOO attack (60)
