PyPI: Adversarial-Robustness-Toolbox

CVE-2021-23437

Transitive (introduced via the Pillow dependency)

Safety vulnerability ID: 41784

This vulnerability was reviewed by experts

The information on this page was manually curated by our Cybersecurity Intelligence Team.

Created at Sep 03, 2021. Updated at Mar 23, 2024.

Advisory

Adversarial-robustness-toolbox version 1.8.0 updates its dependency "Pillow" to a secure version. The underlying issue, CVE-2021-23437, is a regular expression denial of service (ReDoS) in Pillow's `ImageColor.getrgb`, fixed in Pillow 8.3.2.
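
A quick way to confirm you are on a fixed release is to compare the installed version against the `1.8.0` threshold from this advisory. A minimal sketch (assumes the third-party `packaging` library is installed):

```python
from importlib.metadata import version  # Python 3.8+

from packaging.version import Version

FIXED = Version("1.8.0")  # first release shipping the Pillow fix, per this advisory

installed = Version(version("adversarial-robustness-toolbox"))
if installed < FIXED:
    print(f"Vulnerable: {installed} < {FIXED} -- upgrade, e.g. "
          'pip install "adversarial-robustness-toolbox>=1.8.0"')
else:
    print(f"OK: {installed} includes the dependency fix")
```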

Affected package

adversarial-robustness-toolbox

Latest version: 1.17.1

Toolbox for adversarial machine learning.

Affected versions

<1.8.0

Fixed versions

1.8.0

Vulnerability changelog

This release of ART v1.7.0 introduces many new evasion and inference attacks, including support for evaluating malware and tabular-data classifiers, a new query-efficient black-box evaluation method (GeoDA), and a strong white-box method (Feature Adversaries). Furthermore, this release introduces an easy-to-use estimator for Espresso ASR models to facilitate ASR research and connect Espresso and ART. It also introduces support for binary classification with single-output neural network classifiers in selected attacks. Many more new features and details can be found below:
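
As a quick illustration of the release's black-box evaluation workflow, the sketch below wraps an untrained toy PyTorch model in an ART estimator and runs GeoDA with its default settings. The model, data, and shapes are placeholders, not part of this release's documentation, and constructor defaults may differ between ART versions:

```python
import numpy as np
import torch.nn as nn

from art.attacks.evasion import GeoDA
from art.estimators.classification import PyTorchClassifier

# Untrained toy image classifier standing in for a real model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(2, 1, 28, 28).astype(np.float32)

# GeoDA is decision-based: it only queries predicted labels, never gradients.
attack = GeoDA(estimator=classifier)
x_adv = attack.generate(x=x)
print("L2 distortion per sample:", np.linalg.norm((x_adv - x).reshape(2, -1), axis=1))
```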

Added

- Added LowProFool evasion attack for imperceptible attacks on tabular data classification in `art.attacks.evasion.LowProFool`. (1063)
- Added Over-the-Air-Flickering attack in PyTorch for evasion on video classifiers in `art.attacks.evasion.OverTheAirFlickeringPyTorch`. (1077, 1102)
- Added API for speech recognition estimators compatible with Imperceptible ASR attack in PyTorch. (1052)
- Added the Carlini & Wagner evasion attack with perturbations constrained in the L0 norm in `art.attacks.evasion.CarliniL0Method`. (844, 1109)
- Added support for Deep Speech v3 in `PyTorchDeepSpeech` estimator. (1107)
- Added support for TensorBoard logging of the evolution of loss-gradient norms (L1, L2, and Linf) per batch, of the adversarial patch, and of the total loss and its model-specific components where available (e.g. `PyTorchFasterRCNN`) in `AdversarialPatchPyTorch`, `AdversarialPatchTensorFlow`, `FastGradientMethod`, and all `ProjectedGradientDescent*` attacks. (1071)
- Added `MalwareGDTensorFlow` attack for evasion on malware classification of portable executables supporting append based, section insertion, slack manipulation, and DOS header attacks. (1015)
- Added Geometric Decision-based Attack (GeoDA) in `art.attacks.evasion.GeoDA` for query-efficient black-box attacks on decision labels using DCT noise. (1001)
- Added framework-specific Feature Adversaries implementations for PyTorch and TensorFlow v2 in `art.attacks.evasion.FeatureAdversaries*`, an efficient white-box attack that generates adversarial examples imitating intermediate representations at multiple layers. (1128, 1142, 1156)
- Added attribute inference attack based on membership inference in `art.attacks.inference.AttributeInferenceMembership`. (1132)
- Added support for binary classification with neural networks with a single output neuron in `FastGradientMethod` and all `ProjectedGradientDescent*` attacks. Neural network binary classifiers with a single output require setting `nb_classes=2` and labels `y` in shape (nb_samples, 1) or (nb_samples,) containing 0 or 1 (see the sketch after this list). Backward compatibility for binary classifiers with two outputs is guaranteed with `nb_classes=2` and labels `y` one-hot-encoded in shape (nb_samples, 2). (1118)
- Added estimator for Espresso ASR models in `art.estimators.speech_recognition.PyTorchEspresso` with support for attacks with `FastGradientMethod`, `ProjectedGradientDescent` and `ImperceptibleASRPyTorch`. (1036)
- Added deprecation warnings for `art.classifiers` and `art.wrappers`, to be replaced by `art.estimators`. (1154)
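
To illustrate the single-output binary interface described above, here is a minimal sketch with an untrained PyTorch model (a sigmoid output paired with `BCELoss`; the toy shapes and random data are illustrative only, while the `nb_classes=2` and label-shape requirements come from the changelog entry itself):

```python
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Single-output binary classifier: one sigmoid unit, BCE loss.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

classifier = PyTorchClassifier(
    model=model,
    loss=nn.BCELoss(),
    input_shape=(4,),
    nb_classes=2,  # required even though the network has a single output
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 4).astype(np.float32)
y = np.random.randint(0, 2, size=(8, 1)).astype(np.float32)  # labels 0/1 in shape (nb_samples, 1)

x_adv = FastGradientMethod(estimator=classifier, eps=0.1).generate(x=x, y=y)
```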

Changed

- Changed `art.utils.load_iris` to use the Iris dataset from `sklearn.datasets` instead of downloading it from `archive.ics.uci.edu`; see the sketch after this list. (1097)
- Changed `HopSkipJump` to check for NaN in the adversarial example candidates and return original (benign) sample if at least one NaN is detected. (1124)
- Changed `SquareAttack` to accept user-defined loss and adversarial criterion definitions, enabling black-box attacks on all machine learning tasks on images beyond classification. (1127)
- Changed `PyTorchFasterRCNN.loss_gradients` to process each sample separately to avoid issues with gradient propagation with `torch>=1.7`. (1138)
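
As a usage note for the `load_iris` change above: the call is unchanged, only the data source moved, so it now works offline. A minimal sketch (the return convention is assumed to match ART's other dataset loaders):

```python
from art.utils import load_iris

# Data now comes from sklearn.datasets instead of a download from
# archive.ics.uci.edu, so no network access is needed.
(x_train, y_train), (x_test, y_test), min_, max_ = load_iris()
print(x_train.shape, y_train.shape)  # features in [min_, max_], one-hot labels
```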

Removed

[None]

Fixed

- Fixed workaround in `art.defences.preprocessor.Mp3Compression` related to a bug in earlier versions of `pydub`. (419)
- Fixed bug in Pixel Attack and Threshold Attack for images with pixels in range [0, 1]. (990)


Severity Details

CVSS Base Score

HIGH 7.5

CVSS v3 Details

HIGH 7.5
Attack Vector (AV)
NETWORK
Attack Complexity (AC)
LOW
Privileges Required (PR)
NONE
User Interaction (UI)
NONE
Scope (S)
UNCHANGED
Confidentiality Impact (C)
NONE
Integrity Impact (I)
NONE
Availability Impact (A)
HIGH

CVSS v2 Details

MEDIUM 5.0
Access Vector (AV)
NETWORK
Access Complexity (AC)
LOW
Authentication (Au)
NONE
Confidentiality Impact (C)
NONE
Integrity Impact (I)
NONE
Availability Impact (A)
PARTIAL