Adapter-transformers

Latest version: v3.2.1.post0

Based on transformers v4.26.1

New
New model integrations
- Add BEiT integration (jannik-brinkmann via 428, 439)
- Add GPT-J integration (ChiragBSavani via 426)
- Add CLIP integration (calpt via 483)
- Add ALBERT integration (lenglaender via 488)
- Add BertGeneration (hSterz via 480)

Misc
- Add support for adapter configuration strings (calpt via 465, 486)
This enables you to easily define adapter configurations as strings. To create a Pfeiffer adapter with reduction factor 16, you can now use `pfeiffer[reduction_factor=16]`. This is especially handy for experiments with different hyperparameters or for the example scripts; see the first sketch below this list. [Learn more](https://docs.adapterhub.ml/overview.html#configuration-strings)
- Add support for `Stack`, `Parallel` & `BatchSplit` composition to prefix tuning (calpt via 476)
In previous `adapter-transformers` versions, you could combine multiple bottleneck adapters, either stacked or in parallel. Now this is also possible for prefix-tuning adapters: add multiple prefixes to the same model to combine their functionality (Stack) or to perform several tasks simultaneously (Parallel, BatchSplit); see the second sketch below this list. [Learn more](https://docs.adapterhub.ml/adapter_composition.html#stack)
- Enable parallel sequence generation with adapters (calpt via 436)
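A minimal usage sketch of the configuration-string syntax (the model checkpoint and adapter name below are placeholders):

```python
from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
# Pass the adapter config as a string; bracket arguments override
# individual hyperparameters of the named configuration.
model.add_adapter("sst-2", config="pfeiffer[reduction_factor=16]")
model.train_adapter("sst-2")
```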
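And a sketch of composing two prefix-tuning adapters, assuming the `PrefixTuningConfig` class and composition blocks from earlier releases (adapter names are placeholders):

```python
import transformers.adapters.composition as ac
from transformers import AutoAdapterModel
from transformers.adapters import PrefixTuningConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("task_a", config=PrefixTuningConfig())
model.add_adapter("task_b", config=PrefixTuningConfig())
# Stack both prefixes on top of each other, or run them side by side:
model.set_active_adapters(ac.Stack("task_a", "task_b"))
# model.set_active_adapters(ac.Parallel("task_a", "task_b"))
```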

Changed
- Removal of the `MultiLingAdapterArguments` class. Use the [`AdapterArguments`](https://docs.adapterhub.ml/classes/adapter_training.html#transformers.adapters.training.setup_adapter_training) class and [`setup_adapter_training`](https://docs.adapterhub.ml/classes/adapter_training.html#transformers.adapters.training.setup_adapter_training) method instead; a brief migration sketch follows this list. [Learn more](https://docs.adapterhub.ml/training.html).
- Upgrade of underlying transformers version to 4.26.1 (calpt via 455, hSterz via 503)
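A hedged migration sketch, assuming `AdapterArguments` and `setup_adapter_training` are importable from `transformers.adapters.training` as in the linked docs, and that `setup_adapter_training` takes the model, the argument dataclass and an adapter name (the adapter name and config below are placeholders):

```python
from transformers import AutoAdapterModel
from transformers.adapters.training import AdapterArguments, setup_adapter_training

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
# Previously handled via MultiLingAdapterArguments:
adapter_args = AdapterArguments(train_adapter=True, adapter_config="pfeiffer")
# Adds (or loads) the adapter and activates it for training.
setup_adapter_training(model, adapter_args, adapter_name="glue_sst2")
```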

Fixed
- Fixes for GLUE & dependency parsing example script (calpt via 430, 454)
- Fix access to shared parameters of compacter (e.g. during sequence generation) (calpt via 440)
- Fix reference to adapter configs in `T5EncoderModel` (calpt via 437)
- Fix DeBERTa prefix tuning with enabled relative attention (calpt via 451)
- Fix gating for prefix tuning layers (calpt via 471)
- Fix input to T5 adapter layers (calpt via 479)
- Fix AdapterTrainer hyperparameter tuning (dtuit via 482)
- Move loading best adapter to AdapterTrainer class (MaBeHen via 487)
- Make HuggingFace Hub Mixin work with newer utilities (Helw150 via 473)
- Only compute fusion reg loss if fusion layer is trained (calpt via 505)

adapters3.1.0

Based on transformers v4.21.3

New
New adapter methods
- Add LoRA implementation (calpt via 334, 399): **[Documentation](https://docs.adapterhub.ml/overview.html#lora)**
- Add (IA)^3 implementation (calpt via 396): **[Documentation](https://docs.adapterhub.ml/overview.html#ia-3)**
- Add UniPELT implementation (calpt via 407): **[Documentation](https://docs.adapterhub.ml/overview.html#unipelt)**
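A brief usage sketch for the three methods above (the checkpoint, adapter names and hyperparameter values are illustrative):

```python
from transformers import AutoAdapterModel
from transformers.adapters import IA3Config, LoRAConfig, UniPELTConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("lora_task", config=LoRAConfig(r=8, alpha=16))
model.add_adapter("ia3_task", config=IA3Config())
model.add_adapter("unipelt_task", config=UniPELTConfig())
# Activate and train one of them; the base model stays frozen.
model.train_adapter("lora_task")
```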

New model integrations
- Add `Deberta` and `DebertaV2` integration (hSterz via 340)
- Add Vision Transformer integration (calpt via 363)

Misc
- Add `adapter_summary()` method (calpt via 371): **[More info](https://adapterhub.ml/blog/2022/09/updates-in-adapter-transformers-v3-1/#adapter_summary-method)**
- Return AdapterFusion attentions using `output_adapter_fusion_attentions` argument (calpt via 417): **[Documentation](https://docs.adapterhub.ml/adapter_composition.html#retrieving-adapterfusion-attentions)**
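A sketch of both additions above, assuming the `adapter_fusion_attentions` output attribute described in the linked documentation (adapter names are placeholders):

```python
from transformers import AutoAdapterModel, AutoTokenizer

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("task_a")
model.add_adapter("task_b")
model.add_adapter_fusion(["task_a", "task_b"], set_active=True)

# Tabular overview of all adapters added to the model:
print(model.adapter_summary())

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world", return_tensors="pt")
# Also return the per-layer AdapterFusion attention weights:
outputs = model(**inputs, output_adapter_fusion_attentions=True)
print(outputs.adapter_fusion_attentions)
```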

Changed
- Upgrade of underlying transformers version (calpt via 344, 368, 404)

Fixed
- Infer label names for training for flex head models (calpt via 367)
- Ensure root dir exists when saving all adapters/heads/fusions (calpt via 375)
- Avoid attempting to set prediction head if non-existent (calpt via 377)
- Fix T5EncoderModel adapter integration (calpt via 376)
- Fix loading adapters together with full model (calpt via 378)
- Multi-gpu support for prefix-tuning (alexanderhanboli via 359)
- Fix issues with embedding training (calpt via 386)
- Fix initialization of added embeddings (calpt via 402)
- Fix model serialization using `torch.save()` & `torch.load()` (calpt via 406)

adapters3.0.1

Based on transformers v4.17.0

New
Efficient Fine-Tuning Methods
- Add Prefix Tuning (calpt via 292)
- Add Parallel adapters & Mix-and-Match adapter (calpt via 292)
- Add Compacter (hSterz via 297)
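A hedged sketch of the new methods, assuming the `PrefixTuningConfig`, `CompacterConfig` and `MAMConfig` classes exported by this release (`MAMConfig` being the Mix-and-Match preset that combines prefix tuning with a parallel adapter; names and values are illustrative):

```python
from transformers import AutoAdapterModel
from transformers.adapters import CompacterConfig, MAMConfig, PrefixTuningConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("prefix_task", config=PrefixTuningConfig(prefix_length=30))
model.add_adapter("compacter_task", config=CompacterConfig())
# Mix-and-Match: prefix tuning plus a parallel bottleneck adapter.
model.add_adapter("mam_task", config=MAMConfig())
```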

Misc
- Introduce `XAdapterModel` classes as central & recommended model classes (calpt via 289)
- Introduce `ConfigUnion` class for flexible combination of adapter configs (calpt via 292)
- Add `AdapterSetup` context manager to replace `adapter_names` parameter (calpt via 257); see the sketch after this list
- Add `ForwardContext` to wrap model forward pass with adapters (calpt via 267, 295)
- Search all remote sources when passing `source=None` (new default) to `load_adapter()` (calpt via 309)
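A sketch of the `AdapterSetup` context manager mentioned above (the adapter name is a placeholder):

```python
from transformers import AutoAdapterModel, AutoTokenizer
from transformers.adapters import AdapterSetup

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("task_a")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world", return_tensors="pt")
# The adapters given here are active only inside the context,
# replacing the deprecated adapter_names forward argument.
with AdapterSetup("task_a"):
    outputs = model(**inputs)
```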

Changed
- Deprecate `XModelWithHeads` in favor of `XAdapterModel` (calpt via 289)
- Refactor adapter integration into model classes and model configs (calpt via 263, 304)
- Rename activation functions to match Transformers' names (hSterz via 298)
- Upgrade of underlying transformers version (calpt via 311)

Fixed
- Fix seq2seq generation with flexible head classes (calpt via 275, hSterz via 285)
- Fix `Parallel` composition for XLM-Roberta (calpt via 305)


adapters2.3.0

Based on transformers v4.12.5

New
- Allow adding, loading & training of model embeddings (hSterz via 245). See https://docs.adapterhub.ml/embeddings.html.
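A hedged sketch of the embeddings API described in the linked docs, assuming `add_embeddings(name, tokenizer)` and `set_active_embeddings(name)` methods (the tokenizer and embedding name are placeholders):

```python
from transformers import AutoModelWithHeads, AutoTokenizer

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
domain_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # e.g. a tokenizer for a new domain
# Add a separate, trainable embedding matrix for the new tokenizer and switch to it.
model.add_embeddings("domain_embeddings", domain_tokenizer)
model.set_active_embeddings("domain_embeddings")
```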

Changed
- Unify built-in & custom head implementation (hSterz via 252)
- Upgrade of underlying transformers version (calpt via 255)

Fixed
- Fix documentation and consistency issues for AdapterFusion methods (calpt via 259)
- Fix serialization/deserialization issues with custom adapter config classes (calpt via 253)


adapters2.2.0

Based on transformers v4.11.3

New
Model support
- `T5` adapter implementation (AmirAktify & hSterz via 182)
- `EncoderDecoderModel` adapter implementation (calpt via 222)

Prediction heads
- `AutoModelWithHeads` prediction heads for language modeling (calpt via 210)
- `AutoModelWithHeads` prediction head & training example for dependency parsing (calpt via 208)

Training
- Add a new `AdapterTrainer` for training adapters (hSterz via 218, 241); see the sketch after this list
- Enable training of `Parallel` block (hSterz via 226)
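A minimal training sketch, assuming `AdapterTrainer` is importable from the top-level `transformers` package in this version (`train_dataset` is a placeholder for a pre-tokenized dataset):

```python
from transformers import AdapterTrainer, AutoModelWithHeads, TrainingArguments

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("sst-2")
model.add_classification_head("sst-2", num_labels=2)
model.train_adapter("sst-2")  # freeze the base model, train only the adapter

trainer = AdapterTrainer(
    model=model,
    args=TrainingArguments(output_dir="./out", num_train_epochs=3),
    train_dataset=train_dataset,  # placeholder: your tokenized dataset
)
trainer.train()
```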

Misc
- Add `get_adapter_info()` method (calpt via 220)
- Add `set_active` argument to add & load adapter/fusion/head methods (calpt via 214); see the sketch after this list
- Minor improvements for adapter card creation for HF Hub upload (calpt via 225)
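A sketch of the `set_active` argument (adapter and head names are placeholders):

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
# set_active=True activates the added component right away,
# without a separate set_active_adapters() call.
model.add_adapter("sentiment", config="pfeiffer", set_active=True)
model.add_classification_head("sentiment", num_labels=2, set_active=True)
```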

Changed
- Upgrade of underlying transformers version (calpt via 232, 234, 239)
- Allow multiple AdapterFusion configs per model; remove `set_adapter_fusion_config()` (calpt via 216)

Fixed
- Fix incorrect referencing between adapter layer and layer norm for `DataParallel` (calpt via 228)


adapters2.1.0

Based on transformers v4.8.2

New
Integration into HuggingFace's Model Hub
- Add support for loading adapters from HuggingFace Model Hub (calpt via 162)
- Add method to push adapters to HuggingFace Model Hub (calpt via 197); see the sketch after this list
- **[Learn more](https://docs.adapterhub.ml/huggingface_hub.html)**
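A hedged sketch of loading from and pushing to the Model Hub; the repo IDs and tags are placeholders, and the `push_adapter_to_hub()` parameters follow the linked documentation:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
# Load an adapter stored on the HuggingFace Model Hub:
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")
model.set_active_adapters(adapter_name)
# ...and push your own (trained) adapter back to the Hub:
model.push_adapter_to_hub("my-awesome-adapter", adapter_name, datasets_tag="imdb")
```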

`BatchSplit` adapter composition
- `BatchSplit` composition block for adapters and heads (hSterz via 177); see the sketch after this list
- **[Learn more](https://docs.adapterhub.ml/adapter_composition.html#batchsplit)**
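A sketch of the `BatchSplit` block (adapter names and batch sizes are illustrative):

```python
import transformers.adapters.composition as ac
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("task_a")
model.add_adapter("task_b")
# The first 2 examples of every batch are routed through "task_a",
# the remaining 4 through "task_b".
model.set_active_adapters(ac.BatchSplit("task_a", "task_b", batch_sizes=[2, 4]))
```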

Various new features
- Add automatic conversion of static heads when loaded via `XModelWithHeads` (calpt via 181)
**[Learn more](https://docs.adapterhub.ml/prediction_heads.html#automatic-conversion)**
- Add `list_adapters()` method to search for adapters (calpt via 193)
**[Learn more](https://docs.adapterhub.ml/loading.html#finding-pre-trained-adapters)**
- Add `delete_adapter()`, `delete_adapter_fusion()` and `delete_head()` methods (calpt via 189)
- MAD-X 2.0 WikiAnn NER notebook (hSterz via 187)
- Upgrade of underlying transformers version (hSterz via 183, calpt via 194 & 200)
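A sketch of `list_adapters()` and the new deletion methods, assuming the import path and `AdapterInfo` fields shown in the linked docs:

```python
from transformers import AutoModelWithHeads, list_adapters

# source can be "ah" (AdapterHub), "hf" (Model Hub) or None for both.
adapter_infos = list_adapters(source="ah", model_name="bert-base-uncased")
for info in adapter_infos[:5]:
    print(info.adapter_id, info.task)

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model.add_adapter("tmp_adapter")
# Remove components again by name:
model.delete_adapter("tmp_adapter")
```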

Changed
- Deprecate `add_fusion()` and `train_fusion()` in favor of `add_adapter_fusion()` and `train_adapter_fusion()` (calpt via 190)

Fixed
- Suppress no-adapter warning when `adapter_names` is given (calpt via 186)
- `leave_out` in `load_adapter()` when loading language adapters from Hub (hSterz via 177)

adapters2.0.1
