Accelerate

0.7.0

Logging API

Use any of your favorite logging libraries (TensorBoard, Wandb, CometML...) with just a few lines of code inside your training scripts with Accelerate. All details are in the [documentation](https://huggingface.co/docs/accelerate/tracking).
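
A minimal sketch of how the tracking API is wired into a script, assuming a TensorBoard backend and the `logging_dir` keyword from this release (the exact keyword names can vary across versions):

```python
from accelerate import Accelerator

# Pick a tracker backend: "tensorboard", "wandb" or "comet_ml".
accelerator = Accelerator(log_with="tensorboard", logging_dir="runs")
accelerator.init_trackers("my_project")  # called once at the start of training

for step in range(10):
    loss = 1.0 / (step + 1)  # placeholder metric
    accelerator.log({"train_loss": loss}, step=step)

accelerator.end_training()  # flushes and closes all trackers
```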

* Add logging capabilities by muellerzr in https://github.com/huggingface/accelerate/pull/293

Support for FSDP (Fully Sharded Data Parallel)

PyTorch recently released a new model wrapper for sharded DDP training called [FSDP](https://pytorch.org/docs/stable/fsdp.html). This release adds support for it (note that it doesn't work with mixed precision yet). See all caveats in the [documentation](https://huggingface.co/docs/accelerate/fsdp).
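
A hypothetical sketch of the plugin-style setup, assuming the `FullyShardedDataParallelPlugin` and `fsdp_plugin` names from the linked documentation; the usual route is still `accelerate config` followed by `accelerate launch`:

```python
import torch
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Default plugin settings; sharding strategy and auto-wrap policy can be customized.
fsdp_plugin = FullyShardedDataParallelPlugin()
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

model = torch.nn.Linear(16, 16)
# One of the documented caveats: prepare the model first, then build the
# optimizer from the wrapped parameters and prepare it as well.
model = accelerator.prepare(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
optimizer = accelerator.prepare(optimizer)
```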

* PyTorch FSDP Feature Incorporation by pacman100 in https://github.com/huggingface/accelerate/pull/321

Batch size finder

Say goodbye to CUDA OOM errors with the new `find_executable_batch_size` decorator. Just decorate your training function and pick a starting batch size, then let Accelerate do the rest.
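
A minimal sketch, assuming the decorator is importable from `accelerate.utils` (the module path has moved between releases); on a CUDA out-of-memory error the wrapped function is retried with a smaller batch size:

```python
from accelerate import Accelerator
from accelerate.utils import find_executable_batch_size

accelerator = Accelerator()

@find_executable_batch_size(starting_batch_size=128)
def training_loop(batch_size):
    # Build the dataloaders/model at this batch size and run training here.
    # If a CUDA OOM error is raised, the decorator frees memory, halves
    # batch_size and calls this function again.
    print(f"Trying batch size {batch_size}")

training_loop()  # called without arguments; the decorator injects batch_size
```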

* Add a memory-aware decorator for CUDA OOM avoidance by muellerzr in https://github.com/huggingface/accelerate/pull/324

Examples revamp

The [Accelerate examples](https://github.com/huggingface/accelerate/tree/main/examples) are now split in two: the base folder contains very simple NLP and computer vision examples, as well as complete versions incorporating all features. You can also browse the examples in the `by_feature` subfolder, which shows exactly what code to add for each given feature (checkpointing, tracking, cross-validation, etc.).

* Refactor Examples by Feature by muellerzr in https://github.com/huggingface/accelerate/pull/312

What's Changed
* Document save/load state by muellerzr in https://github.com/huggingface/accelerate/pull/290
* Refactor precisions to its own enum by muellerzr in https://github.com/huggingface/accelerate/pull/292
* Load model and optimizer states on CPU to avoid OOMs by sgugger in https://github.com/huggingface/accelerate/pull/299
* Fix example for datasets v2 by sgugger in https://github.com/huggingface/accelerate/pull/298
* Leave default as None in `mixed_precision` for launch command by sgugger in https://github.com/huggingface/accelerate/pull/300
* Pass `lr_scheduler` to `Accelerator.prepare` by sgugger in https://github.com/huggingface/accelerate/pull/301
* Create new TestCase classes and clean up W&B tests by muellerzr in https://github.com/huggingface/accelerate/pull/304
* Have custom trackers work with the API by muellerzr in https://github.com/huggingface/accelerate/pull/305
* Write tests for comet_ml by muellerzr in https://github.com/huggingface/accelerate/pull/306
* Fix training in DeepSpeed by sgugger in https://github.com/huggingface/accelerate/pull/308
* Update example scripts by muellerzr in https://github.com/huggingface/accelerate/pull/307
* Use --no_local_rank for DeepSpeed launch by sgugger in https://github.com/huggingface/accelerate/pull/309
* Fix Accelerate CLI CPU option + small fix for W&B tests by muellerzr in https://github.com/huggingface/accelerate/pull/311
* Fix DataLoader sharding for deepspeed in accelerate by m3rlin45 in https://github.com/huggingface/accelerate/pull/315
* Create a testing framework for example scripts and fix current ones by muellerzr in https://github.com/huggingface/accelerate/pull/313
* Refactor Tracker logic and write guards for logging_dir by muellerzr in https://github.com/huggingface/accelerate/pull/316
* Create Cross-Validation example by muellerzr in https://github.com/huggingface/accelerate/pull/317
* Create alias for Accelerator.free_memory by muellerzr in https://github.com/huggingface/accelerate/pull/318
* fix typo in docs of accelerate tracking by loubnabnl in https://github.com/huggingface/accelerate/pull/320
* Update examples to show how to deal with extra validation copies by muellerzr in https://github.com/huggingface/accelerate/pull/319
* Fixup all checkpointing examples by muellerzr in https://github.com/huggingface/accelerate/pull/323
* Introduce reduce operator by muellerzr in https://github.com/huggingface/accelerate/pull/326

New Contributors
* m3rlin45 made their first contribution in https://github.com/huggingface/accelerate/pull/315
* loubnabnl made their first contribution in https://github.com/huggingface/accelerate/pull/320
* pacman100 made their first contribution in https://github.com/huggingface/accelerate/pull/321

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.6.0...v0.7.0

0.6.2

Since v0.6.0, the launcher had been ignoring the mixed precision attribute of the config. This patch fixes that.

0.6.1

Patches an issue with mixed precision (see #286).

0.6.0

This release adds support for bfloat16 mixed precision training (requires PyTorch >= 1.10) and a brand-new checkpoint utility to help with resuming interrupted training runs. We also get a completely revamped [documentation frontend](https://huggingface.co/docs/accelerate/index).

Checkpoints

Save the current state of all your objects (models, optimizers, RNG states) with `accelerator.save_state(path_to_checkpoint)` and reload everything by calling `accelerator.load_state(path_to_checkpoint)`.
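
For example (a minimal sketch with a toy model; a real script would typically call `save_state` at the end of each epoch):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

accelerator.save_state("my_checkpoint")  # saves model, optimizer and RNG states
# ... later, to resume an interrupted run:
accelerator.load_state("my_checkpoint")
```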

* Add in checkpointing capability by muellerzr in https://github.com/huggingface/accelerate/pull/255
* Implementation of saving and loading custom states by muellerzr in https://github.com/huggingface/accelerate/pull/270

BFloat16 support

Accelerate now supports bfloat16 mixed precision training. As a result, the old `--fp16` argument has been deprecated in favor of the more generic `--mixed_precision`.
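
In a training script this maps to the `mixed_precision` argument of `Accelerator` (a minimal sketch, assuming the keyword mirrors the launch flag; `"no"`, `"fp16"` and `"bf16"` are the expected values):

```python
from accelerate import Accelerator

# bf16 mixed precision; requires PyTorch >= 1.10 and hardware with bfloat16 support.
accelerator = Accelerator(mixed_precision="bf16")
```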

* Add bfloat16 support 243 by ikergarcia1996 in https://github.com/huggingface/accelerate/pull/247

New env subcommand

You can now type `accelerate env` to have a copy-pastable summary of your environment and default configuration. Very convenient when opening a new issue!

* add env command by johnnv1 in https://github.com/huggingface/accelerate/pull/280

New doc frontend

The documentation has been switched to the new Hugging Face frontend, like Transformers and Datasets.

* Convert documentation to the new front by sgugger in https://github.com/huggingface/accelerate/pull/271

What's Changed

* Fix send_to_device with non-tensor data by sgugger in https://github.com/huggingface/accelerate/pull/177
* Handle UserDict in all utils by sgugger in https://github.com/huggingface/accelerate/pull/179
* Use collections.abc.Mapping to handle both the dict and the UserDict types by mariosasko in https://github.com/huggingface/accelerate/pull/180
* fix: use `store_true` on argparse in nlp example by monologg in https://github.com/huggingface/accelerate/pull/183
* Update README.md by TevenLeScao in https://github.com/huggingface/accelerate/pull/187
* Add signature check for `set_to_none` in Optimizer.zero_grad by sgugger in https://github.com/huggingface/accelerate/pull/189
* fix typo in code snippet by MrZilinXiao in https://github.com/huggingface/accelerate/pull/199
* Add high-level API reference to README by Chris-hughes10 in https://github.com/huggingface/accelerate/pull/204
* fix rng_types in accelerator by s-kumano in https://github.com/huggingface/accelerate/pull/206
* Pass along drop_last in DispatchDataLoader by sgugger in https://github.com/huggingface/accelerate/pull/212
* Rename state to avoid name conflicts with pytorch's Optimizer class. by yuxinyuan in https://github.com/huggingface/accelerate/pull/224
* Fix lr scheduler num samples by sgugger in https://github.com/huggingface/accelerate/pull/227
* Add customization point for init_process_group kwargs by sgugger in https://github.com/huggingface/accelerate/pull/228
* Fix typo in installation docs by jaketae in https://github.com/huggingface/accelerate/pull/234
* make deepspeed optimizer match parameters of passed optimizer by jmhessel in https://github.com/huggingface/accelerate/pull/246
* Upgrade black to version ~=22.0 by LysandreJik in https://github.com/huggingface/accelerate/pull/250
* add support of gather_object by ZhiyuanChen in https://github.com/huggingface/accelerate/pull/238
* Add launch flags --module and --no_python (256) by parameter-concern in https://github.com/huggingface/accelerate/pull/258
* Accelerate + Animus/Catalyst = 🚀 by Scitator in https://github.com/huggingface/accelerate/pull/249
* Add `debug_launcher` by sgugger in https://github.com/huggingface/accelerate/pull/259
* enhance compatibility of honor type by ZhiyuanChen in https://github.com/huggingface/accelerate/pull/241
* Add a flag to use CPU only in the config by sgugger in https://github.com/huggingface/accelerate/pull/263
* Basic fixes for DeepSpeed by sgugger in https://github.com/huggingface/accelerate/pull/264
* Ability to set the seed with randomness from inside Accelerate by muellerzr in https://github.com/huggingface/accelerate/pull/266
* Don't use dispatch_batches when torch is < 1.8.0 by sgugger in https://github.com/huggingface/accelerate/pull/269
* Make accelerated model with AMP possible to pickle by BenjaminBossan in https://github.com/huggingface/accelerate/pull/274
* Contributing guide by LysandreJik in https://github.com/huggingface/accelerate/pull/254
* replace texts and link (master -> main) by johnnv1 in https://github.com/huggingface/accelerate/pull/282
* Use workflow from doc-builder by sgugger in https://github.com/huggingface/accelerate/pull/275
* Pass along execution info to the exit of autocast by sgugger in https://github.com/huggingface/accelerate/pull/284

New Contributors
* mariosasko made their first contribution in https://github.com/huggingface/accelerate/pull/180
* monologg made their first contribution in https://github.com/huggingface/accelerate/pull/183
* TevenLeScao made their first contribution in https://github.com/huggingface/accelerate/pull/187
* MrZilinXiao made their first contribution in https://github.com/huggingface/accelerate/pull/199
* Chris-hughes10 made their first contribution in https://github.com/huggingface/accelerate/pull/204
* s-kumano made their first contribution in https://github.com/huggingface/accelerate/pull/206
* yuxinyuan made their first contribution in https://github.com/huggingface/accelerate/pull/224
* jaketae made their first contribution in https://github.com/huggingface/accelerate/pull/234
* jmhessel made their first contribution in https://github.com/huggingface/accelerate/pull/246
* ikergarcia1996 made their first contribution in https://github.com/huggingface/accelerate/pull/247
* ZhiyuanChen made their first contribution in https://github.com/huggingface/accelerate/pull/238
* parameter-concern made their first contribution in https://github.com/huggingface/accelerate/pull/258
* Scitator made their first contribution in https://github.com/huggingface/accelerate/pull/249
* muellerzr made their first contribution in https://github.com/huggingface/accelerate/pull/255
* BenjaminBossan made their first contribution in https://github.com/huggingface/accelerate/pull/274
* johnnv1 made their first contribution in https://github.com/huggingface/accelerate/pull/280

**Full Changelog**: https://github.com/huggingface/accelerate/compare/v0.5.1...v0.6.0

0.5.1

Fixes the following two bugs:
- `convert_to_fp32` returned booleans instead of tensors (#173)
- wrong `DataLoader` length when `dispatch_batches=True` (#175)

0.5.0

This release introduces support for iterating through a `DataLoader` only on the main process, which then dispatches the batches to all processes.

Dispatch batches from main DataLoader

The motivation behind this comes from dataset streaming, which introduces two difficulties:
- there might be timeouts for some elements of the dataset, and those elements may then differ between the launched processes, so it's impossible to make sure the data is iterated through the same way on each process
- when using an `IterableDataset`, each process goes through the dataset and thus applies the preprocessing to all elements, which can slow training down

This new feature is activated by default for any `IterableDataset`.
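
A minimal sketch, assuming the `dispatch_batches` flag on `Accelerator` (its default is dynamic: `True` for an `IterableDataset`, `False` otherwise):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Only the main process iterates the DataLoader; every other process
# receives its share of each batch instead of reading the data itself.
accelerator = Accelerator(dispatch_batches=True)

dataset = TensorDataset(torch.randn(64, 4))
dataloader = accelerator.prepare(DataLoader(dataset, batch_size=8))

for (batch,) in dataloader:
    pass  # each process gets the portion dispatched from the main process
```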

- Central dataloader 164 (sgugger)
- Dynamic default for `dispatch_batches` 168 (sgugger)

Various fixes

- fix fp16 convert back to fp32 for issue: unsupported operand type(s) for /: 'dict' and 'int' 149 (Doragd)
- [Docs] Machine config is yaml not json 151 (patrickvonplaten)
- Fix gather for 0d tensor 152 (sgugger)
- [DeepSpeed] allow untested optimizers deepspeed 150 (patrickvonplaten)
- Raise errors instead of warnings with better tests 170 (sgugger)
