
Foolbox

3.0.2

Fixes a bug in the `BrendelBethgeAttack` (thanks AidanKelley)

3.0.1

Bug fixes
* type annotations are now correctly exposed using `py.typed` (file was missing in MANIFEST)
* TransformBoundsWrapper now correctly handles `data_format` (thanks zimmerrol)

3.0.0

New Features

Foolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.

3.0.0b1

New Features

* added `foolbox.gradient_estimators`
* improved attack hyperparameter documentation

3.0.0b0

Foolbox 3 aka Foolbox Native has been rewritten from scratch with performance in mind. All code is running natively in PyTorch, TensorFlow and JAX, and all attacks have been rewritten with real batch support.

Warning: This is a pre-release beta version. Expect breaking changes.

2.4.0

New Features

* fixed PyTorch model gradients (fixes DeepFool with batch size > 1)
* added support for TensorFlow 2.0 and newer (Graph and Eager mode)
* refactored the tests
* support for the latest `randomgen` version

2.3.0

New Features
* new `EnsembleAveragedModel` (thanks to zimmerrol)
* new `foolbox.utils.flatten`
* new `foolbox.utils.atleast_kd`
* new `foolbox.utils.accuracy`
* `PyTorchModel` now always warns if model is in train mode, not just once
* batch support for `ModelWithEstimatedGradients`
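For intuition, the three new utilities could be sketched in plain numpy roughly like this (simplified signatures for illustration, not the actual library code):

```python
import numpy as np

def flatten(x, keep=1):
    # flatten everything except the first `keep` (batch) dimensions
    return x.reshape(x.shape[:keep] + (-1,))

def atleast_kd(x, k):
    # append trailing singleton axes until x has k dimensions,
    # so per-sample scalars broadcast against batched tensors
    return x.reshape(x.shape + (1,) * (k - x.ndim))

def accuracy(logits, labels):
    # fraction of samples whose argmax prediction matches the label
    return float((logits.argmax(axis=-1) == labels).mean())

batch = np.zeros((8, 3, 32, 32))
print(flatten(batch).shape)                         # (8, 3072)
print(atleast_kd(np.ones(8), 4).shape)              # (8, 1, 1, 1)
print(accuracy(np.eye(4), np.array([0, 1, 2, 0])))  # 0.75
```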

Bug fixes
* fixed dtype when using Adam PGD with a PyTorch model
* fixed CW attack hyperparameters

2.2.0

New Features
* support for Foolbox extensions using the `foolbox.ext` namespace

2.1.0

New Features
* New `foolbox.models.JAXModel` class to support JAX models (https://github.com/google/jax)
* The `preprocessing` argument of models now supports a `flip_axis` key to support common preprocessing operations like RGB to BGR in a nice way. This builds on the ability to pass dicts to `preprocessing` introduced in Foolbox 2.0.
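A rough numpy sketch of what the `flip_axis` step does (the function name and signature here are illustrative, not the library API): flipping the channel axis turns RGB input into BGR before the usual normalization.

```python
import numpy as np

def preprocess(x, flip_axis=None, mean=0.0, std=1.0):
    # illustrative sketch: flip one axis (e.g. the channel axis to
    # turn RGB into BGR), then apply the usual normalization
    if flip_axis is not None:
        x = np.flip(x, axis=flip_axis)
    return (x - mean) / std

# channels-first image whose R, G, B planes are constant 0.0, 0.5, 1.0
rgb = np.stack([np.full((2, 2), v) for v in (0.0, 0.5, 1.0)])
bgr = preprocess(rgb, flip_axis=0)
print(bgr[0, 0, 0], bgr[2, 0, 0])  # 1.0 0.0
```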

Bug fixes and improvements
* Fixed a serious bug in the `LocalSearchAttack` (thanks to duoergun0729)
* `foolbox.utils.samples` now warns if samples are repeated
* `foolbox.utils.samples` now uses PNGs instead of JPGs (except for ImageNet)
* Other bug fixes
* Improved docstrings
* Improved docs

2.0.0

* **batch support**: check out the new example in the [README](https://github.com/bethgelab/foolbox)
* model and defense zoo: https://foolbox.readthedocs.io/en/latest/user/zoo.html
* attacks take an optional `threshold` argument to stop attacks once that threshold is reached

`foolbox.attacks` now refers to the attacks with batch support. The old attacks can still be accessed under `foolbox.v1.attacks`. Batch support has been added to almost all attacks and new attacks will only be implemented with batch support. If you need batch support for an old attack that has not yet been adapted, please open an issue.

2.0.0rc0

2.0.0b0

Batch-support is finally here!

See #316 for details until we have updated the documentation. Right now it's still limited to a few attacks, but feel free to open an issue for any attack that you need. It's easy to extend to new attacks; we just haven't done it yet and will prioritize based on requests.

1.8.0

Foolbox Model Zoo

Foolbox now has an easy way to load models or defenses from Git repos: https://foolbox.readthedocs.io/en/latest/user/zoo.html

1.7.0

New Features

* Foolbox now has support for the Spatial Attack (https://arxiv.org/abs/1712.02779)

Bug Fixes

* Foolbox now uses its own random number generators to be independent of seeds set inside models.
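The idea can be illustrated in a few lines: draws come from a private random state, so (re)seeding the global numpy generator, as a model might do internally, no longer influences them.

```python
import numpy as np

rng = np.random.RandomState(22)  # private state owned by the attack

np.random.seed(0)                # a model (re)seeding the global state...
a = rng.uniform(size=3)
np.random.seed(0)                # ...has no effect on the private stream
b = rng.uniform(size=3)

print(np.allclose(a, b))         # False: the private stream keeps advancing
```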

1.6.2

Added missing `backward()` support to the `CompositeModel` model wrapper.

1.6.1

The `foolbox.models.TensorFlowModel.from_keras` constructor now automatically uses the session used by `tf.keras` instead of TensorFlow's default session.

1.6.0

New features

* support for **TensorFlow Eager**: [TensorFlowEagerModel](https://foolbox.readthedocs.io/en/latest/modules/models.html#foolbox.models.TensorFlowEagerModel)
* improved support for **`tensorflow.keras`** models: [TensorFlowModel.from_keras(...)](https://foolbox.readthedocs.io/en/latest/modules/models.html#foolbox.models.TensorFlowModel.from_keras)
* Foolbox-native implementation of the **Carlini Wagner L2 attack**: [CarliniWagnerL2Attack](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.CarliniWagnerL2Attack)
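The core of the Carlini-Wagner L2 formulation can be sketched as a single objective (a toy numpy illustration of the loss being minimized, not the attack implementation; `c` and `kappa` are the usual trade-off and confidence constants):

```python
import numpy as np

def cw_l2_objective(delta, logits, label, c=1.0, kappa=0.0):
    # ||delta||_2^2 plus c times a margin term that only reaches its
    # minimum (-kappa) once the true class is no longer the top logit
    other = np.max(np.delete(logits, label))
    margin = max(float(logits[label] - other), -kappa)
    return float(np.sum(delta ** 2)) + c * margin

logits = np.array([3.0, 1.0, 0.5])  # still correctly classified as 0
print(cw_l2_objective(np.zeros(3), logits, label=0))  # 2.0
```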

1.5.0

New features

* all Foolbox attacks now support early stopping when reaching a certain perturbation size
* just pass a `threshold` to the attack or `Adversarial` instance during initialization
* the distance metric can now be passed to the attack during initialization (no need to manually create a `Adversarial` instance anymore)
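Conceptually, the early-stopping behaviour looks like this (a toy sketch with a made-up candidate loop, not the library's actual control flow):

```python
import numpy as np

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def first_below_threshold(x, candidates, threshold):
    # toy sketch: examine candidate adversarials in order and stop as
    # soon as one is within the requested distance threshold
    for cand in candidates:
        if mse(x, cand) <= threshold:
            return cand  # early stop: good enough, no further search
    return None

x = np.zeros(2)
cands = [np.full(2, 2.0), np.full(2, 1.0), np.full(2, 0.1)]
result = first_below_threshold(x, cands, threshold=1.5)
print(result)  # [1. 1.]
```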

1.4.0

* The `Adversarial` class now remembers the model output for the best adversarial so far. For deterministic models this is the same as `fmodel.predictions(adversarial.image)`, but it can be useful for non-deterministic models. Note that very close to the decision boundary, even otherwise deterministic models can become stochastic because of non-deterministic floating point operations such as `reduce_sum`. In addition to the new `output` attribute, there is also a new `adversarial_class` attribute for convenience; it just takes the argmax of the output.
* new [ADefAttack](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.ADefAttack) thanks to EvgeniaAR
* new [NewtonFoolAttack](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.NewtonFoolAttack) thanks to bveliqi
* new FAQ section in the docs: https://foolbox.readthedocs.io/en/latest/user/faq.html
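The new convenience attribute is just an argmax over the stored output (toy values for illustration):

```python
import numpy as np

# hypothetical stored model output for the best adversarial so far
output = np.array([0.1, 0.7, 0.2])

# adversarial_class is simply the argmax of that stored output
adversarial_class = int(output.argmax())
print(adversarial_class)  # 1
```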

1.3.2

Fixed assertions that prevented custom preprocessing functions from changing the shape of the input (see #187).

1.3.1

New Features

* added the `EvolutionaryStrategiesGradientEstimator` as an alternative to the `CoordinateWiseGradientEstimator` introduced in 1.3.0 (thanks to lukas-schott)
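As a rough sketch of the difference: an evolutionary-strategies style estimator averages loss differences along random Gaussian directions instead of walking coordinates one by one. A toy numpy illustration with a known-gradient function (sample count and `sigma` are arbitrary choices, not library defaults):

```python
import numpy as np

def es_gradient_estimate(f, x, samples=5000, sigma=0.01, rng=None):
    # antithetic ES estimate: average (f(x+s*u) - f(x-s*u)) * u over
    # Gaussian directions u; converges to the true gradient of f at x
    rng = rng or np.random.RandomState(0)
    grad = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)
        grad += (f(x + sigma * u) - f(x - sigma * u)) * u
    return grad / (2 * sigma * samples)

f = lambda z: float(np.sum(z ** 2))   # toy loss, true gradient is 2z
est = es_gradient_estimate(f, np.array([1.0, -2.0]))
print(np.round(est, 1))               # close to [2.0, -4.0]
```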

1.3.0

Highlights
* added support for arbitrary preprocessing functions with custom gradients (e.g. input binarization with a straight-through approximation in the backward pass)
* added the `ModelWithEstimatedGradients` model wrapper to replace a model's gradients with gradients estimated by an arbitrary gradient estimator
* added the `CoordinateWiseGradientEstimator` and an easy template to implement custom gradient estimators
* added the `BinarizationRefinementAttack` that uses information about a model's input binarization to refine adversarials found by other attacks
* added the `ConfidentMisclassification` criterion
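The binarization-with-custom-gradient idea can be sketched as a pair of forward/backward functions (illustrative numpy, not the wrapper API):

```python
import numpy as np

def binarize_forward(x, threshold=0.5):
    # forward pass: hard thresholding, which has zero gradient almost
    # everywhere and would block gradient-based attacks
    return (x > threshold).astype(x.dtype)

def binarize_backward(grad_output):
    # straight-through approximation: treat the binarization as the
    # identity in the backward pass and let the gradient through
    return grad_output

x = np.array([0.2, 0.6, 0.9])
print(binarize_forward(x))                            # [0. 1. 1.]
print(binarize_backward(np.array([1.0, -2.0, 3.0])))  # unchanged
```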

Other improvements
* added a `binarize` function in utils to provide a consistent way to specify input binarization as part of the preprocessing
* added `batch_crossentropy` in utils
* added preprocessing support to LasagneModel
* renamed the `GradientLess` model wrapper to `ModelWithoutGradients`
* bug fixes
* improved documentation and fixed typos

1.2.0

Highlights
* [Basic Iterative Method](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.BasicIterativeMethod) reimplemented (Linfinity, L1, L2)
* recommended instead of IterativeGradientAttack and IterativeGradientSignAttack
* [Projected Gradient Descent](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.ProjectedGradientDescent) attack (with and without random start)
* [Momentum Iterative Method](https://foolbox.readthedocs.io/en/latest/modules/attacks/gradient.html#foolbox.attacks.MomentumIterativeMethod)
* full PyTorch 0.4.0 support (thanks cjsg)
* new MXNetGluonModel wrapper for [Gluon models](https://gluon.mxnet.io) (thanks meissnereric)
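The common core of these iterative Linf attacks can be sketched in a few lines (toy numpy, with a constant-gradient loss standing in for a model):

```python
import numpy as np

def linf_pgd(grad_fn, x0, eps, step, iters, rng=None):
    # signed gradient ascent steps with a random start, projected back
    # into the Linf ball of radius eps around the original input
    rng = rng or np.random.RandomState(0)
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)
    for _ in range(iters):
        x = x + step * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # projection onto the ball
    return x

# toy loss with gradient identically 1: PGD saturates at x0 + eps
x0 = np.zeros(4)
adv = linf_pgd(lambda x: np.ones_like(x), x0, eps=0.3, step=0.1, iters=10)
print(np.max(np.abs(adv - x0)))  # 0.3
```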

Other improvements
* official PyTorch example in the docs
* bug fixes
* updated tests to use newer versions of the different frameworks
* improved documentation and fixed typos

1.1.0

* added the PointwiseAttack (supersedes the ResetAttack)
* attacks now provide the full function signature of their `__call__` method as well as parameter documentation
* added additional checks for correctness of the returned adversarials even when attacks misbehave
* replaced the randomstate package with the randomgen package
* bug fixes and improvements

1.0.0

Improved the documentation and the availability of useful function signatures. Attack parameters are now fully documented, like everything else, and this documentation is directly accessible within Jupyter / IPython and IDEs.

0.15.0

* fixed `CompositeModel` and added it to docs
* added `L0` and `Linfinity` (`Linf`) distance measures
* added `DeepFoolLinfinityAttack`
* renamed `DeepFoolAttack` to `DeepFoolL2Attack`
* the new `DeepFoolAttack` now chooses the norm to optimize based on the employed distance measure (alternatively, `p=2` or `p=np.inf` can be passed)
* fixed integer overflows caused by numpy
* improved tests

0.14.0

Fixed a numeric issue when attacking Keras models that provide probability outputs (instead of logits) using a gradient-based attack.

0.13.0

Fixed package dependency issues.

0.12.4

* improved `GradientAttack` and `GradientSignAttack` (FGSM) to handle smaller epsilons
* fixed Pillow imports
* improved README rendering on PyPI
* other minor changes

0.11.1

Made the `BoundaryAttack` less verbose.