Inplace-abn

Latest version: v1.1.0


1.0.1

This update adds back support for mixed precision training. These combinations of inputs / parameters are now supported:
- `float32` input, `float32` weight and bias
- `float64` input, `float64` weight and bias
- `float16` input, `float16` weight and bias
- `float16` input, `float32` weight and bias

**Note**: in the `float16` cases, all internal operations are still performed in `float32` math, and `float16` is not supported when operating in CPU mode.
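The supported combinations above can be expressed as a small lookup. The helper below is a hypothetical illustration, not part of the `inplace_abn` API; it simply mirrors the input/parameter dtype pairs listed in this release:

```python
# Hypothetical helper (not part of inplace_abn) mirroring the supported
# (input dtype, weight/bias dtype) combinations listed above.
SUPPORTED_COMBINATIONS = {
    ("float32", "float32"),  # float32 input, float32 weight and bias
    ("float64", "float64"),  # float64 input, float64 weight and bias
    ("float16", "float16"),  # float16 input, float16 weight and bias
    ("float16", "float32"),  # mixed precision: fp16 input, fp32 parameters
}

def is_supported(input_dtype: str, param_dtype: str) -> bool:
    """Return True if the (input, weight/bias) dtype pair is supported."""
    return (input_dtype, param_dtype) in SUPPORTED_COMBINATIONS
```

Note that the mixed case is asymmetric: `float16` inputs with `float32` parameters are allowed, but not the reverse.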

1.0.0

This release marks some major changes in `inplace_abn`:
- Complete rewrite of the CUDA code, following the most recent native BN implementation from PyTorch
- Improved synchronized BN implementation, correctly handling different per-GPU batch sizes and PyTorch distributed groups
- The iABN layers are now packaged in an installable Python library to simplify use in other projects
- The ImageNet / Vistas scripts are still available in the `scripts` folder
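The key idea behind handling different per-GPU batch sizes is that global statistics must be weighted by each GPU's element count rather than averaged naively. The sketch below is not the library's actual implementation; it illustrates the statistics-merging step, with each GPU contributing a `(count, sum, sum of squares)` triple:

```python
# Sketch (not inplace_abn's actual code) of merging per-GPU BN statistics
# when batch sizes differ: the global mean and (population) variance are
# computed from the weighted totals, not from a plain average of per-GPU means.

def merge_bn_stats(per_gpu_stats):
    """per_gpu_stats: list of (count, total, total_sq) tuples, one per GPU."""
    count = sum(c for c, _, _ in per_gpu_stats)
    total = sum(s for _, s, _ in per_gpu_stats)
    total_sq = sum(q for _, _, q in per_gpu_stats)
    mean = total / count
    var = total_sq / count - mean * mean  # E[x^2] - E[x]^2
    return mean, var
```

For example, one GPU holding `[1, 2, 3]` and another holding `[4, 5]` contribute `(3, 6, 14)` and `(2, 9, 41)`; merging yields mean 3.0 and variance 2.0, exactly the statistics of the combined batch `[1, 2, 3, 4, 5]`.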

0.1.1

We added support for training ResNet with in-place ABN layers.

In addition, we released ResNet34 and ResNet50 models pre-trained on ImageNet.

0.1

This is a code refactoring to enable compatibility with PyTorch v1.0.

Additional changes:
- Moved from multi-threaded training to distributed training using multiple processes
- Provided an adapted implementation of synchronized in-place ABN
- Made the in-place ABN layer compatible with fp16 tensors

0.0.3

This is a partial code refactoring to enable compatibility with PyTorch v0.4.1. In particular:
- Fixed compatibility with pytorch>=0.4.1 due to changes to `AT_ASSERT`
- Fixed GPU allocation of tensors created in CUDA code

Additional changes:
- Added segmentation models and scripts to run inference on Vistas
- Updated license

0.0.2

This is a partial code refactoring to enable compatibility with PyTorch v0.4. In particular:
- Native functions have been rewritten to use the new ATen-based extension interface introduced in v0.4. As a side effect, the native code no longer needs to be pre-compiled; instead, we now use PyTorch's newly introduced run-time library loading mechanism.
- The Python code has been modified to account for the fact that `autograd.Variable` no longer exists.

Additional changes:
- ABN modules have been slightly refactored, leading to a slight change in the structure of the overall models' `state_dict`s. As a consequence, pre-trained models need to be re-downloaded (updated links in `README.md`).
