TextAttack

Latest version: v0.3.10


0.3.3

Not secure
1. Merge pull request 508 from QData/example_bug_fix

2. Merge pull request 505 from QData/s3-model-fix

3. Merge pull request 503 from QData/multilingual-doc

4. Merge pull request 502 from QData/Notebook-10-bug-fix

5. Merge pull request 500 from QData/docstring-rework-missing

6. Merge pull request 497 from QData/dependabot/pip/docs/tensorflow-2.4.2

7. Merge pull request 495 from QData/readthedoc-fix

0.3.2

Not secure
Multiple bug fixes:

- Merge pull request 473 from cogeid/file-redirection-fix

- Merge pull request 469 from xinzhel/allennlp_doc

- Merge pull request 477 from cogeid/Fix-RandomSwap-and-RandomSynonymI…

- Merge pull request 484 from QData/update-torch-version

- Merge pull request 490 from QData/scipy-version-plus-two-doc-updates

- Merge pull request 420 from QData/multilingual

- Merge pull request 495 from QData/readthedoc-fix

0.3.0

Not secure
Updated API
We have added two new classes called `Attacker` and `Trainer` that can be used to perform adversarial attacks and adversarial training with full logging support and multi-GPU parallelism. This is intended to provide an alternative way of performing attacks and training for custom models and datasets.

`Attacker`: Running Adversarial Attacks
Below is an example use of `Attacker` to attack a BERT model fine-tuned on the IMDB dataset using the TextFooler method. The `AttackArgs` class is used to set the parameters of the attack, including the number of examples to attack, the CSV file to log results to, and the interval at which to save checkpoints.

![Screen Shot 2021-06-24 at 8 34 44 PM](https://user-images.githubusercontent.com/32072203/123256196-a85e6280-d52b-11eb-94fd-a4f0408a851a.png)

More details about `Attacker` and `AttackArgs` can be found [here](https://textattack.readthedocs.io/en/latest/api/attack.html).

`Trainer`: Running Adversarial Training
Previously, TextAttack supported adversarial training only in a limited manner: users could train models only via the CLI command, and not every aspect of training was available for tuning.

The `Trainer` class introduces an easy way to train custom PyTorch/Transformers models on a custom dataset. Below is an example where we fine-tune BERT on the IMDB dataset with an adversarial attack called [DeepWordBug](https://arxiv.org/abs/1801.04354).

![Screen Shot 2021-06-25 at 9 28 57 PM](https://user-images.githubusercontent.com/32072203/123425045-93053900-d5fc-11eb-81ae-2e11f2b15137.png)

`Dataset`
Previously, datasets passed to TextAttack were simply expected to be iterables of `(input, target)` tuples. While this offers flexibility, it prevents users from passing key information about the dataset that TextAttack could use to provide a better experience (e.g. label names, label remapping, the input column names used for printing).

We now explicitly define a `Dataset` class that users can use or subclass for their own datasets.

Bug Fixes:
- 467: Don't check self.target_max_score when it is already known to be None.
- 417: Fixed bug where in masked_lm transformations only subwords were candidates for top_words.

0.2.15

Not secure
CLARE Attack (356, 392)
We have added a new attack proposed by "[Contextualized Perturbation for Textual Adversarial Attack](https://arxiv.org/abs/2009.07502)" (Li et al., 2020). There's also a corresponding augmenter recipe using CLARE. Thanks to Hanyu-Liu-123, cookielee77.

Custom Word Embedding (333, 399)
We have added support for custom word embeddings via the `AbstractWordEmbedding`, `WordEmbedding`, and `GensimWordEmbedding` classes from `textattack.shared`. These three classes allow users to use their own custom word embeddings for transformations and constraints that require word embeddings. Thanks to tsinggggg and alexander-zap for contributing!

Bug Fixes and Changes
- We fixed a bug that caused TextAttack to under-report the average number of queries (350, thanks a1noack).
- Updated the dataset split used to evaluate robustness during adversarial training (361, thanks Opdoop).
- Updated default parameters for the TextBugger recipe (373).
- Fixed an issue with TextBugger by updating the default method used to segment text into words so it works with homoglyphs (376, thanks lethaiq!).
- Updated `ModelWrapper` so that the `get_grad` method is no longer required (381).
- Fixed an issue with `WordSwapMaskedLM` that caused the lowest-probability words to be picked first (396).

0.2.14

Not secure
Improvements

- Bug fixes
- Matching documentation in Readme.md and the files in /doc folder
- Add CheckList
- Add multilingual USE
- Add gradient-based word importance ranking
- Update to a more complete API documentation
- Add CoLA constraint
- Add the lazy loader

0.2.12

Not secure
Big Improvements
- Add CheckList
- Add multilingual USE
- Add gradient-based word importance ranking
- Update to a more complete API documentation
- Add CoLA constraint
- Add the lazy loader
