pytorch-ignite

Latest version: v0.4.11


0.4.11

New Features

Engine and Events

- Added `before` and `after` event filters (2727)
- `every` and `before`/`after` event filters can now be mixed (2860)
- The `once` event filter can now accept a sequence of ints (2858)

```python
# "once" event filter
@engine.on(Events.ITERATION_STARTED(once=[50, 60]))
def call_once(engine):
    # do something on the 50th and 60th iterations
    ...

# "before" and "after" event filters
@engine.on(Events.EPOCH_STARTED(after=10, before=30))
def call_after_and_before(engine):
    # do something on epochs 11 to 29
    ...

# Mixing "every" with "before" / "after" event filters
@engine.on(Events.EPOCH_STARTED(every=5, after=8, before=25))
def call_after_and_before_every(engine):
    # do something on epochs 9, 14, 19, 24
    ...
```


- Improved deterministic engine (2756)
- Gradient accumulation should not affect the value of the loss (2737)
- Added `model_transform` argument to `create_supervised_trainer` (2848); see the sketch below
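
A minimal sketch of the new `model_transform` argument, assuming it receives the raw model output and returns the tensor passed to `loss_fn`; the dict-returning model below is a hypothetical setup:

```python
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer

# Hypothetical model returning a dict instead of a plain tensor
class DictOutputModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return {"logits": self.fc(x)}

model = DictOutputModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# `model_transform` unwraps the raw output before it is fed to the loss
trainer = create_supervised_trainer(
    model, optimizer, criterion,
    model_transform=lambda output: output["logits"],
)
```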

Distributed module

- Updated `idist.all_gather` to take a `group` argument (2715)
- Updated `idist.all_reduce` to take a `group` argument (2712)
- Added `idist.new_group` method (2711); see the sketch below
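
A minimal sketch combining the three changes above, assuming a distributed run launched via `idist.Parallel`; the backend and the ranks in the sub-group are illustrative choices:

```python
import torch
import ignite.distributed as idist

def compute(local_rank):
    # Create a process sub-group from the first two ranks (illustrative)
    group = idist.new_group(ranks=[0, 1])

    t = torch.tensor([float(idist.get_rank())])
    if idist.get_rank() in (0, 1):
        # Collective ops restricted to the sub-group via the new `group` arg
        reduced = idist.all_reduce(t, group=group)
        gathered = idist.all_gather(t, group=group)
        print(idist.get_rank(), reduced, gathered)

with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
    parallel.run(compute)
```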

Metrics and handlers

- Updated `LRFinder` to support optimizers with more than one parameter group (2704)
- Added `get_param` method to `ParamGroupScheduler` (2720)
- Updated `PolyaxonLogger` (2776)
- Dropped `TrainsLogger` and `TrainsSaver`, and removed the BC code (2742)
- Refactored PSNR and SSIM (2797)
- **[BC-breaking]** Aligned SSIM output with PSNR output; both now return tensors (2794); see the sketch after this list
- Added distributed support to `RocCurve` (2802)
- Refactored `EpochMetric` and made it idempotent (2800)
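
A minimal sketch of attaching the refactored metrics to an evaluator; after 2794 both `compute()` results are tensors. `data_range=1.0` assumes images scaled to [0, 1], and the step function is a placeholder:

```python
from ignite.engine import Engine
from ignite.metrics import PSNR, SSIM

def eval_step(engine, batch):
    y_pred, y = batch  # assumed (B, C, H, W) image tensors
    return y_pred, y

evaluator = Engine(eval_step)
PSNR(data_range=1.0).attach(evaluator, "psnr")
SSIM(data_range=1.0).attach(evaluator, "ssim")
```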

Bug fixes

- Fixed a device issue in the SSIM metric tests and updated PSNR (2796)
- Fixed `LRScheduler` issue and fixed CI (2780)
- Fixed the code to raise `ModuleNotFoundError` instead of `RuntimeError` (2750)
- Fixed `sync_all_reduce` to cover update->compute->update case (2803)

Housekeeping (docs, CI, examples, tests, etc)

- 2875, 2872, 2871, 2869, 2868, 2867, 2866, 2864, 2863, 2854, 2852, 2840, 2849, 2844, 2839, 2838, 2835, 2826, 2822, 2820, 2807, 2805, 2795, 2788, 2787, 2798, 2793, 2790, 2786, 2778, 2777, 2765, 2760, 2759, 2757, 2751, 2750, 2748, 2741, 2739, 2736, 2730, 2729, 2726, 2724, 2722, 2721, 2719, 2718, 2717, 2706, 2705, 2701, 2432

- Dropped Python 3.7 from CI (2836)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

DeepC004, JakubDz2208, Moh-Yakoub, RishiKumarRay, abhi-glitchhg, crj1998, guptaaryan16, louis-she, pacificdragon, puhuk, sadra-barikbin, sallycaoyu, soma2000-lang, theory-in-progress, vfdev-5, ydcjeff

New Contributors
* JakubDz2208 made their first contribution in https://github.com/pytorch/ignite/pull/2704
* soma2000-lang made their first contribution in https://github.com/pytorch/ignite/pull/2742
* guptaaryan16 made their first contribution in https://github.com/pytorch/ignite/pull/2786
* RishiKumarRay made their first contribution in https://github.com/pytorch/ignite/pull/2790
* crj1998 made their first contribution in https://github.com/pytorch/ignite/pull/2794
* abhi-glitchhg made their first contribution in https://github.com/pytorch/ignite/pull/2835
* sallycaoyu made their first contribution in https://github.com/pytorch/ignite/pull/2849
* DeepC004 made their first contribution in https://github.com/pytorch/ignite/pull/2858
* pacificdragon made their first contribution in https://github.com/pytorch/ignite/pull/2863

0.4.10

New Features

Engine

- Added Engine interrupt/continue feature (2699, 2682)

Example:
```python
from ignite.engine import Engine, Events

data = range(10)
max_epochs = 3

def check_input_data(e, b):
    print(f"Epoch {e.state.epoch}, Iter {e.state.iteration} | data={b}")
    i = (e.state.iteration - 1) % len(data)
    assert b == data[i]

engine = Engine(check_input_data)

@engine.on(Events.ITERATION_COMPLETED(every=11))
def call_interrupt():
    engine.interrupt()

print("Start engine run with interruptions:")
state = engine.run(data, max_epochs=max_epochs)
print("1 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("2 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("3 Engine ended the run at ", state.epoch, state.iteration)
```


<details>
<summary>
Output
</summary>


```
Start engine run with interruptions:
Epoch 1, Iter 1 | data=0
Epoch 1, Iter 2 | data=1
Epoch 1, Iter 3 | data=2
Epoch 1, Iter 4 | data=3
Epoch 1, Iter 5 | data=4
Epoch 1, Iter 6 | data=5
Epoch 1, Iter 7 | data=6
Epoch 1, Iter 8 | data=7
Epoch 1, Iter 9 | data=8
Epoch 1, Iter 10 | data=9
Epoch 2, Iter 11 | data=0
1 Engine run is interrupted at 2 11
Epoch 2, Iter 12 | data=1
Epoch 2, Iter 13 | data=2
Epoch 2, Iter 14 | data=3
Epoch 2, Iter 15 | data=4
Epoch 2, Iter 16 | data=5
Epoch 2, Iter 17 | data=6
Epoch 2, Iter 18 | data=7
Epoch 2, Iter 19 | data=8
Epoch 2, Iter 20 | data=9
Epoch 3, Iter 21 | data=0
Epoch 3, Iter 22 | data=1
2 Engine run is interrupted at 3 22
Epoch 3, Iter 23 | data=2
Epoch 3, Iter 24 | data=3
Epoch 3, Iter 25 | data=4
Epoch 3, Iter 26 | data=5
Epoch 3, Iter 27 | data=6
Epoch 3, Iter 28 | data=7
Epoch 3, Iter 29 | data=8
Epoch 3, Iter 30 | data=9
3 Engine ended the run at 3 30
```

</details>



- Deprecated `Events.default_event_filter` and replaced it with None (2644)
- [**BC-breaking**] Rewritten Engine's `terminate` and `terminate_epoch` logic (2645)
- Improved the "time taken" log message to show milliseconds (2650)

Metrics and handlers

- Added ZeRO built-in support to `Checkpoint` in a distributed configuration (2658, 2642)
- Added `save_on_rank` argument to `DiskSaver` and `Checkpoint` (2641); see the sketch after this list
- Added a `handle_buffers` option for `EMAHandler` (2592)
- Improved Precision and Recall metrics (2573)
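
A minimal sketch of `save_on_rank`, assuming a distributed run where checkpoints should be written by rank 1 instead of the default rank 0; the model, optimizer and step function are placeholders:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
trainer = Engine(lambda engine, batch: None)  # placeholder training step

to_save = {"model": model, "optimizer": optimizer}
checkpoint = Checkpoint(
    to_save,
    DiskSaver("/tmp/checkpoints", create_dir=True, save_on_rank=1),
    n_saved=2,
    save_on_rank=1,
)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint)
```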

Bug fixes

- Median metrics (e.g., median absolute error) now use an `np.median`-compatible torch median implementation (2681)
- Fixed issues when removing handlers on filtered events (2690)
- A few minor fixes in Engine and Event (2680)
- [**BC-breaking**] Fixed `Engine.terminate()` behaviour when resumed (2678)


Housekeeping (docs, CI, examples, tests, etc)

- 2700, 2698, 2696, 2695, 2694, 2691, 2688, 2679, 2676, 2675, 2673, 2671, 2670, 2668, 2667, 2666, 2665, 2664, 2662, 2660, 2659, 2657, 2656, 2655, 2653, 2652, 2651, 2647, 2646, 2640, 2639, 2637, 2630, 2629, 2628, 2625, 2624, 2620, 2618, 2617, 2616, 2613, 2611, 2609, 2606, 2605, 2604, 2601, 2597, 2584, 2581, 2542

- Improved metrics tests in DDP configuration


Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

BowmanChow, daniellepintz, haochunchang, kamalojasv181, puhuk, sadra-barikbin, sandylaker, sdesrozis, vfdev-5

0.4.9

New Features

- Added `whitelist` argument to log only desired weights/grads with experiment tracking system handlers: 2550, 2523
- Added `ReduceLROnPlateauScheduler` parameter scheduler: 2449
- Added filename components in `Checkpoint`: 2498
- Added missing args to `ModelCheckpoint`, parity with `Checkpoint`: 2486
- **[BC-breaking]** `LRScheduler` is now attachable to `Events.ITERATION_STARTED`: 2496; see the sketch below
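
A minimal sketch of the new attachment point, assuming `LRScheduler` wraps a PyTorch scheduler as before; the model, optimizer and step function are placeholders:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
from ignite.engine import Engine, Events
from ignite.handlers import LRScheduler

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
torch_scheduler = StepLR(optimizer, step_size=10, gamma=0.5)

trainer = Engine(lambda engine, batch: None)  # placeholder training step

# The LR is now updated at ITERATION_STARTED, i.e. before the iteration runs
trainer.add_event_handler(Events.ITERATION_STARTED, LRScheduler(torch_scheduler))
```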

Bug fixes

- Fixed the placement of `zero_grad` in `create_supervised_trainer`, which resulted in logged gradients being zero: 2560, 2559, 2555, 2547
- Fixed bug in `Checkpoint` when loading a single non-`nn.Module` object: 2487
- Removed warning in DDP if `Metric.reset/update` are not decorated: 2549
- **[BC-breaking]** Fixed SSIM metric implementation and issue with variable batch inputs: 2564, 2563
  - `compute` method now returns `float` instead of `torch.Tensor`

Housekeeping (docs, CI, examples, tests, etc)

- 2552, 2543, 2541, 2534, 2531, 2530, 2529, 2528, 2526, 2525, 2521, 2518, 2512, 2509, 2507, 2506, 2497, 2494, 2493, 2490, 2485, 2483, 2477, 2476, 2474, 2473, 2469, 2463, 2461, 2460, 2457, 2454, 2450, 2448, 2446, 2445, 2442, 2440, 2439, 2435, 2433, 2431, 2430, 2428, 2427,

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Davidportlouis, DevPranjal, Ishan-Kumar2, KevinMusgrave, Moh-Yakoub, asmayer, divo12, gorarakelyan, jreese, leotac, nishantb06, nmcguire101, sadra-barikbin, sayantan1410, sdesrozis, vfdev-5, yuta0821

0.4.8

New Features

- Added support for passing `data=None` to `Engine.run` (2369)
- `Checkpoint.load_objects` can now accept a `str` path and load the checkpoint internally (2305); both changes are shown in the sketch below
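
A minimal sketch of both additions; the checkpoint path is illustrative and assumed to exist:

```python
import torch.nn as nn
from ignite.engine import Engine
from ignite.handlers import Checkpoint

def training_step(engine, batch):
    # batch is None when the engine is run without data
    pass

trainer = Engine(training_step)

# Run without a data iterable: epoch_length must be provided explicitly
trainer.run(data=None, max_epochs=2, epoch_length=100)

# Pass a checkpoint file path (str); it is loaded internally
model = nn.Linear(4, 2)
Checkpoint.load_objects(to_load={"model": model}, checkpoint="/tmp/checkpoints/model.pt")
```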


Bug fixes

- Fixed issue with `DeterministicEngine.state_dict()` (2412)
- Fixed `EMAHandler` warm-up behaviour (2333)
- Fixed `_compute_nproc_per_node` in case of bad dist configuration (2288)
- Fixed state parameter scheduler to work with `EMAHandler` (2326)
- Fixed a bug on `StateParamScheduler.attach` method (2316)
- Fixed `ClearMLLogger` to retrieve current task before trying to create a new one (2344)
- Added a checkpoint hashing utility: 2272, 2283, 2273
- Fixed config check issue with multi-node spawn method (2424)

Housekeeping (docs, CI, examples, tests, etc)

- Added doctests for docstrings: 2241, 2402, 2400, 2399, 2395, 2394, 2391, 2389, 2384, 2352, 2351, 2349, 2348, 2347, 2346, 2345, 2341, 2340, 2336, 2335, 2332, 2327, 2324, 2323, 2321, 2317, 2311, 2307, 2290, 2284, 2280
- 2420, 2411, 2409, 2404, 2392, 2382, 2380, 2378, 2377, 2374, 2371, 2370, 2365, 2362, 2360, 2359, 2357, 2355, 2334, 2331, 2329, 2308, 2297, 2292, 2285, 2279, 2278, 2277, 2270, 2264, 2261, 2252,


Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Abo7atm, DevPranjal, Eunjnnn, FarehaNousheen, H4dr1en, Ishan-Kumar2, KickItLikeShika, Priyansi, bibhabasumohapatra, fco-dv, louis-she, sandylaker, sdesrozis, trsvchn, vfdev-5, ydcjeff

0.4.7

New Features

- Enabled `LRFinder` to run multiple epochs (2200)
- `save_handler` now automatically uses `DiskSaver` when a path is passed (2198)
- Improved `Checkpoint` to use `score_name` as metric's key (2146)
- Added `State` parameter scheduler (2090)
- Added state attributes for loggers (tqdm, Polyaxon, MLFlow, WandB, Neptune, Tensorboard, Visdom, ClearML) (2162, 2161, 2160, 2154, 2153, 2152, 2151, 2148, 2140, 2137)
- Added gradient accumulation to supervised training step functions (2223); see the sketch after this list
- Added automatic Jupyter environment detection (2188)
- Added an additional argument to `auto_optim` to allow gradient accumulation (2169)
- Added micro averaging for BLEU score (2179)
- Expanded BLEU and ROUGE to be computed on batch input (2259, 2180)
- Moved `BasicTimeProfiler`, `HandlersTimeProfiler`, `ParamScheduler`, `LRFinder` to core (2136, 2135, 2132)
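
A minimal sketch of gradient accumulation in the supervised training step function; the `gradient_accumulation_steps` keyword and the value 4 are illustrative:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, supervised_training_step

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Gradients are accumulated over 4 iterations before each optimizer step
update_fn = supervised_training_step(
    model, optimizer, criterion, gradient_accumulation_steps=4
)
trainer = Engine(update_fn)
```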

Bug fixes

- Fixed docstring examples with huge bottom padding (2225)
- Fixed NCCL warning caused by barrier if using idist (2257, 2254)
- Fixed hostname list expansion (2208, 2204)
- Fixed TCP error with PyTorch v1.9.1 (2211)

Housekeeping (docs, CI, examples, tests, etc)

- 2243, 2242, 2228, 2164, 2222, 2221, 2220, 2219, 2218, 2217, 2216, 2173, 2164, 2207, 2236, 2190, 2256, 2196, 2177, 2166, 2155, 2149, 2234, 2206, 2186, 2176, 2246, 2231, 2182, 2192, 2165, 2227, 2253, 2247, 2250, 2226, 2201, 2184, 2142, 2232, 2238, 2174

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Chandan-h-509, Ishan-Kumar2, KickItLikeShika, Priyansi, fco-dv, gucifer, kennethleungty, logankilpatrick, mfoglio, sandylaker, sdesrozis, theory-in-progress, toxa23, trsvchn, vfdev-5, ydcjeff

0.4.6

New Features

- Added `start_lr` option to `FastaiLRFinder` (2111)
- Added a model EMA handler, `EMAHandler` (2098, 2102); see the sketch after this list
- Improved SLURM support: added hostlist expansion without using `scontrol` (2092)
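
A minimal sketch of the new model EMA handler; the momentum value, attach event and state attribute name are illustrative choices:

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import EMAHandler

model = nn.Linear(4, 2)
trainer = Engine(lambda engine, batch: None)  # placeholder training step

ema_handler = EMAHandler(model, momentum=0.0002)
# Update the EMA weights every iteration; the averaged model is available
# as `ema_handler.ema_model`
ema_handler.attach(trainer, name="ema_momentum", event=Events.ITERATION_COMPLETED)
```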

Metrics

- Added Inception Score (2053)
- Added FID metric (2049, 2061, 2085, 2094, 2103); see the sketch after this list
- Blog post "GAN Evaluation: the Frechet Inception Distance and Inception Score metrics" (https://pytorch-ignite.ai/posts/gan-evaluation-with-fid-and-is/)
- Improved DDP support for metrics (2096, 2083)
- Improved `MetricsLambda` to work with `reset/update/compute` API (2091)
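
A minimal sketch of attaching the new GAN metrics to an evaluation engine; the step function is a placeholder returning batches of generated and real images:

```python
from ignite.engine import Engine
from ignite.metrics import FID, InceptionScore

def eval_step(engine, batch):
    fake, real = batch  # assumed batches of image tensors
    return fake, real

evaluator = Engine(eval_step)
FID().attach(evaluator, "fid")
# InceptionScore only needs the generated images (first element of the output)
InceptionScore(output_transform=lambda out: out[0]).attach(evaluator, "is")
```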

Bug fixes

- Modified `auto_dataloader` to not wrap user provided `DistributedSampler` (2119)
- Raise error in `DistributedProxySampler` when sampler is already a `DistributedSampler` (2120)
- Improved LRFinder error message (2127)
- Added `py.typed` for type checkers (2095)

Housekeeping

- 2123, 2117, 2116, 2110, 2093, 2086

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

01-vyom, KickItLikeShika, gucifer, sandylaker, schuhschuh, sdesrozis, trsvchn, vfdev-5, ydcjeff
