PyTorch-Ignite

Latest version: v0.4.13

0.4.2

Core

New Features and bug fixes

- Added SSIM metric (1217)
- Added pre-built [Docker images](https://hub.docker.com/u/pytorchignite) for computer vision and NLP tasks,
  powered by NVIDIA/Apex, Horovod, MS DeepSpeed (1304 1248 1218)
- Added distributed support for `EpochMetric` and related metrics (1229)
- Added `required_output_keys` public attribute (1291)

Handlers and utils

- Allow passing keyword arguments to save function on `Checkpoint` (1245)

Distributed helper module

- Added support of Horovod (1195)
- Added `idist.broadcast` (1237)
- Added `sync_bn` option to `idist.auto_model` (1265)

Contrib

New Features and bug fixes

- Added `EpochOutputStore` handler (1226)
- Improved displayed tag for tqdm progress bar (1279)
- Fixed bug with `ParamGroupScheduler` with schedulers based on different optimizers (1274)
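`EpochOutputStore` (1226) caches the engine's `process_function` output for every iteration of an epoch so the outputs can be inspected afterwards. A minimal self-contained sketch of the idea (an illustrative reimplementation, not ignite's actual handler):

```python
class EpochOutputStore:
    """Sketch: collect the per-iteration output over one epoch."""

    def __init__(self, output_transform=lambda x: x):
        self.output_transform = output_transform
        self.data = []

    def reset(self):
        # Would be attached to EPOCH_STARTED to clear the store.
        self.data = []

    def update(self, output):
        # Would be attached to ITERATION_COMPLETED to record the output.
        self.data.append(self.output_transform(output))


store = EpochOutputStore()
store.reset()
for output in [0.5, 0.25, 0.75]:  # stand-in for per-iteration outputs
    store.update(output)
```

In the real handler, `reset` and `update` are registered as event handlers on the engine rather than called by hand.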


And a lot of housekeeping, including pre-September Hacktoberfest contributions:

- Added initial Mypy check at CI step (1296)
- Fixed typo in docs (concepts) (1295)
- Fixed link to pytorch documents (1294)
- Removed prints from tests (1292)
- Downgraded tqdm version to stabilize the CI (1293)

---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

M3L6H, Tawishi, WrRan, ZhiliangWu, benji011, fco-dv, kamahori, kenjihiraoka, kilsenp, n2cholas, nzare, sdesrozis, theodumont, vfdev-5, ydcjeff,

0.4.1

Core

New Features and bug fixes

- Improved docs for custom events (1179)

Handlers and utils
- Added custom filename pattern for saving checkpoints (1127)

Distributed helper module
- Improved naming in `_XlaDistModel` (1173)
- Minor optimization for `idist.get_*` methods (1196)
- Fixed distributed proxy sampler runtime error (1192)
- Fixed bug when using `idist` with the "nccl" backend while torch CUDA is not available (1166)
- Fixed issue with logging XLA tensors (1207)


Contrib

New Features and bug fixes
- Fixed warning "TrainsLogger output_handler can not log metrics value" (1170)
- Improved usage of contrib common methods with other save handlers (1171)


Examples
- Improved Pascal VOC example (1193)



---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Joel-hanson, WrRan, jspisak, marload, ryanwongsa, sdesrozis, vfdev-5

0.4.0.post1

0.4.0

Core

BC breaking changes

- Simplified engine - BC breaking change (940 939 938)
- no more internal patching of torch DataLoader.
- seed argument of `Engine.run` is deprecated.
- previous behaviour can be achieved with `DeterministicEngine`, introduced in 939.
- Make all `Events` be `CallableEventsWithFilter` (788).
- Make ignite compatible only with pytorch >1.0 (1016).
- ignite is tested on the latest and nightly versions of pytorch.
- exact compatibility with previous versions can be checked [here](https://github.com/pytorch/ignite/actions?query=workflow%3A.github%2Fworkflows%2Fpytorch-version-tests.yml).
- Remove deprecated arguments from `BaseLogger` (1051).
- Deprecated `CustomPeriodicEvent` (984).
- `RunningAverage` now computes output quantity average instead of a sum in DDP (991).
- `Checkpoint` now stores files with the `.pt` extension instead of `.pth` (873).
- The `archived` argument of `Checkpoint` and `ModelCheckpoint` is deprecated (873).
- `create_supervised_trainer` and `create_supervised_evaluator` no longer move the model to device (910).
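Making all `Events` be `CallableEventsWithFilter` (788) is what enables filtered handlers such as `Events.ITERATION_COMPLETED(every=100)`. A toy sketch of the `every` filter logic (illustrative, not the library code):

```python
def every_filter(every):
    """Build a filter that fires on every `every`-th occurrence of an event."""
    def _filter(engine, event_counter):
        # engine is unused here; ignite passes it for context.
        return event_counter % every == 0
    return _filter


fire = every_filter(3)
# Simulate event counters 1..9; the handler would run only where True.
fired_at = [i for i in range(1, 10) if fire(None, i)]
```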

New Features and bug fixes

Ignite Distributed [Experimental]
- Introduction of `ignite.distributed as idist` module (1045)
- common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
- supports native torch distributed configuration, XLA devices.
- metrics computation works in all supported distributed configurations: GPUs and TPUs.
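`idist` exposes helpers such as `get_world_size()` and `get_rank()` that work whether or not a distributed backend is initialized. A toy sketch of the fallback idea only, reading launcher environment variables and defaulting to a single non-distributed process (this is not ignite's implementation, which inspects the actual backend):

```python
import os


def get_world_size(env=os.environ):
    # Default to 1: a single, non-distributed process.
    return int(env.get("WORLD_SIZE", 1))


def get_rank(env=os.environ):
    # Default to rank 0 when no launcher set the variable.
    return int(env.get("RANK", 0))
```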

Engine & Events
- Add flexibility on event handlers by packing triggering events (868).
- `Engine` argument is now optional in event handlers (889, 919).
- We initialize `engine.state` before calling `engine.run` (1028).
- `Engine` can run on dataloader based on `IterableDataset` and without specifying `epoch_length` (1077).
- Added user keys into Engine's state dict (914).
- Bug fixes in `Engine` class (1048, 994).
- Now `epoch_length` argument is optional (985)
- suitable for iterators of finite but unknown length.
- Added times in `engine.state` (958).

Metrics
- Add `Frequency` metric for ops/s calculations (760, 783, 976).
- Metric computation can be customized with the introduced `MetricUsage` (979, 1054)
- batch-wise/epoch-wise or user-defined update and compute logic.
- `Metric` can be detached (827).
- Fixed bug in `RunningAverage` when output is torch tensor (943).
- Improved computation performance of `EpochMetric` (967).
- Fixed average recall value of `ConfusionMatrix` (846).
- Now metrics can be serialized using `dill` (930).
- Added support for nested metric values (968).
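The `Frequency` metric reports throughput in items per second. A hedged sketch of the core idea, counted items divided by elapsed wall-clock time (illustrative only, not ignite's implementation, which also handles distributed reduction):

```python
import time


class Frequency:
    """Sketch of a throughput (items/s) metric."""

    def __init__(self):
        self.reset()

    def reset(self):
        self._n = 0
        self._start = time.perf_counter()

    def update(self, n_items):
        self._n += n_items

    def compute(self):
        elapsed = time.perf_counter() - self._start
        return self._n / elapsed if elapsed > 0 else 0.0


freq = Frequency()
freq.update(256)  # e.g. items processed in one iteration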

Handlers and utils
- Checkpoint: improved filename when score value is an integer (758).
- Checkpoint: fixed returning the worst model of the saved models (745).
- Checkpoint: `load_objects` can load single object checkpoints (772).
- Checkpoint: we now save only one checkpoint per priority (847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (861).
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (1086).
- Checkpoint and related: other improvements (937).
- Support namedtuple for `convert_tensor` (740).
- Added decorator `one_rank_only` (882).
- Update `common.py` (904).
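Several of the `Checkpoint` changes above concern score-based retention: keeping only the `n_saved` best checkpoints and dropping the rest. A toy sketch of that retention logic using a min-heap (hypothetical helper, not ignite's `Checkpoint` class):

```python
import heapq


class TopNSaver:
    """Sketch: keep only the n_saved best (score, filename) checkpoints."""

    def __init__(self, n_saved=2):
        self.n_saved = n_saved
        self._saved = []  # min-heap of (score, filename); worst at the top

    def __call__(self, score, filename):
        removed = None
        if len(self._saved) < self.n_saved:
            heapq.heappush(self._saved, (score, filename))
        elif score > self._saved[0][0]:
            # New score beats the current worst: swap it in.
            removed = heapq.heapreplace(self._saved, (score, filename))[1]
        return removed  # filename to delete from disk, if any
```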

Contrib

- Added `FastaiLRFinder` (596).

Metrics
- Added Roc Curve and Precision/Recall Curve to the metrics (875).

Parameters scheduling
- Enabled multi params group for `LRScheduler` (1027).
- Parameters scheduling improvements (1072, 859).
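As an example of what parameter scheduling covers, here is a sketch of a linear learning-rate warmup followed by a constant base value. The function name and signature are hypothetical, for illustration only; ignite's actual API composes scheduler objects rather than plain functions:

```python
def warmup_then_base(iteration, warmup_iters, warmup_start, base_lr):
    """Linearly ramp from warmup_start to base_lr over warmup_iters steps,
    then hold base_lr (standing in for a wrapped scheduler)."""
    if iteration < warmup_iters:
        return warmup_start + (base_lr - warmup_start) * iteration / (warmup_iters - 1)
    return base_lr
```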

Support of experiment tracking systems
- Add `NeptuneLogger` (730, 821, 951, 954).
- Add `TrainsLogger` (1020, 1036, 1043).
- Add `WandbLogger` (926).
- Added `visdom_logger` to common module (796).
- TensorboardX is no longer mandatory if pytorch>=1.2 (858).
- Simplified `BaseLogger` attach APIs (1006).
- Added kwargs to loggers' constructors and respective setup functions (1015).

Time profiling
- Added basic time profiler to `contrib.handlers` (729).

Bug fixes (some of PRs)
- Fixed `ProgressBar` output not in sync with epoch counts (773).
- Fixed `ProgressBar.log_message` (768).
- `Progressbar` now accounts for `epoch_length` argument (785).
- Fixed broken `ProgressBar` if data is iterator without epoch length (995).
- Improved `setup_logger` for multiple calls (962).
- Fixed incorrect log position (1099).
- Added missing colon to logging message (1101).

Examples
- Basic example of `FastaiLRFinder` on MNIST (838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (888).
- Added `setup_logger` to mnist examples (953).
- Added MNIST example on TPU (956).
- Benchmark amp on Cifar100 (917).
- `TrainsLogger` semantic segmentation example (1095).

Housekeeping (some of PRs)
- Documentation updates (711, 727, 734, 736, 742, 743, 759, 798, 780, 808, 817, 826, 867, 877, 908, 909, 911, 928, 942, 986, 989, 1002, 1031, 1035, 1083, 1092).
- Offerings to the CI gods (713, 761, 762, 776, 791, 801, 803, 879, 885, 890, 894, 933, 981, 982, 1010, 1026, 1046, 1084, 1093).
- Test improvements (779, 807, 854, 891, 975, 1021, 1033, 1041, 1058).
- Added `Serializable` in mixins (1000).
- Merged `EpochMetric` into `_BaseRegressionEpoch` (970).
- Adding typing to ignite (716, 751, 800, 844, 944, 1037).
- Drop Python 2 support finalized (806).
- Dynamic typing (723).
- Splits engine into multiple parts (724).
- Add Python 3.8 to Conda builds (781).
- Black formatted codebase with pre-commit files (792).
- Activate dpl v2 for Travis CI (804).
- AutoPEP8 (805).
- Fixes nightly version bug (809).
- Fixed device conversion method (887).
- Refactored deps installation (931).
- Return handler in helpers (997).
- Fixes 833 (1001).
- Disabled propagation of loggers to ancestors (1013).
- Consistent PEP8-compliant imports layout (901).


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Crissman, DhDeepLIT, GabrielePicco, InCogNiTo124, ItamarWilf, Joxis, Muhamob, Yevgnen, anmolsjoshi, bendboaz, bmartinn, cajanond, chm90, cqql, czotti, erip, fdlm, hoangmit, isolet, jakubczakon, jkhenning, kai-tub, maxfrei750, michiboo, mkartik, sdesrozis, sisp, vfdev-5, willfrey, xen0f0n, y0ast, ykumards

0.4rc.0.post1

0.3.0

Core
- Added State repr and input batch as engine.state.batch (641)
- Adapted core metrics for use in distributed configurations (635)
- Added fbeta metric as core metric (653)
- Added event filtering feature (e.g. every/once/event filter logic) (656)
- **BC breaking change**: Refactor ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (673)
- Added option `n_saved=None` to store all checkpoints (703)
- Improved accumulation metrics (681)
- Added min delta option to early stopping (685)
- Dropped Python 2.7 support (699)
- Added feature: Metric can accept a dictionary (689)
- Added Dice Coefficient metric (680)
- Added helper method to simplify the setup of class loggers (712)
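The min-delta option for early stopping (685) means an improvement must exceed a threshold before the patience counter resets. A toy sketch of that logic (illustrative, not ignite's `EarlyStopping` handler, which reads the score from the engine):

```python
class EarlyStopper:
    """Sketch: stop after `patience` steps without improvement > min_delta."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = None
        self.counter = 0

    def step(self, score):
        # Returns True when training should stop.
        if self.best is None or score > self.best + self.min_delta:
            self.best = score
            self.counter = 0
            return False
        self.counter += 1
        return self.counter >= self.patience
```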

Engine refactoring (BC breaking change)

Finally solved issue 62: resume training from an epoch or iteration

- Engine refactoring + features (640)
- engine checkpointing
- variable epoch length defined by `epoch_length`
- two additional events: `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
- [cifar10 example](https://github.com/pytorch/ignite/tree/v0.3.0/examples/contrib/cifar10#check-resume-training) with save/resume in distributed conf

Contrib
- Improved `create_lr_scheduler_with_warmup` (646)
- Added helper method to plot param scheduler values with matplotlib (650)
- **BC breaking change**: support for multiple optimizer param groups (690)
- Added state_dict/load_state_dict (690)
- **BC breaking change**: Let the user specify tqdm parameters for `log_message` (695)


Examples
- Added an example of hyperparameters tuning with Ax on CIFAR10 (652)
- Added CIFAR10 distributed example

Reproducible trainings as "References"

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

- [ImageNet](https://github.com/pytorch/ignite/blob/master/examples/references/classification/imagenet)
- [Pascal VOC2012](https://github.com/pytorch/ignite/blob/master/examples/references/segmentation/pascal_voc2012)

Features:

- Distributed training with mixed precision by NVIDIA/Apex
- Experiments tracking with MLflow or Polyaxon


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

anubhavashok, kagrze, maxfrei750, vfdev-5
