Pytorch-toolbelt

0.3.2

New features
  
  * Many helpful callbacks for the Catalyst library: HyperParameterCallback and LossAdapter, to name a few.
  * New losses for deep model supervision (helpful when the target and output masks have different sizes)
  * Stacked Hourglass encoder
  * Context Aggregation Network decoder
  
  Breaking Changes
  
  * ABN module will now resolve as nn.Sequential(BatchNorm2d, Activation) instead of a hand-crafted module. This enables easier conversion of batch normalization modules to nn.SyncBatchNorm (see the sketch after this list).
  
  * Almost every Encoder/Decoder implementation has been refactored for better clarity and flexibility. Please double-check your pipelines.
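
A minimal sketch of why the new ABN behaviour helps, assuming a model whose ABN blocks resolve to plain `nn.Sequential(nn.BatchNorm2d, activation)`; the stock PyTorch helper `nn.SyncBatchNorm.convert_sync_batchnorm` can then swap the batch-norm layers directly:

```python
import torch.nn as nn

# Hypothetical model in which ABN resolves to nn.Sequential(BatchNorm2d, activation).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),
    nn.Sequential(nn.BatchNorm2d(16), nn.ReLU(inplace=True)),  # what ABN now resolves to
    nn.Conv2d(16, 1, kernel_size=1),
)

# Because the blocks contain plain nn.BatchNorm2d modules, the standard PyTorch
# converter can replace them with nn.SyncBatchNorm before wrapping the model in DDP.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```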
  
  Important bugfixes
  
  * Improved numerical stability of Dice / Jaccard losses (using `log_sigmoid() + exp()` instead of plain `sigmoid()`)
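
A small plain-PyTorch illustration of the trick (this mirrors the idea, not the library's exact code). `F.logsigmoid` is evaluated in log-space and stays finite for extreme logits, whereas taking `log()` of a saturated `sigmoid()` output produces `-inf`:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([-200.0, 0.0, 200.0])

# Plain sigmoid saturates to exactly 0.0 / 1.0 in float32 for extreme logits,
# so a subsequent log() inside a log-based Dice/Jaccard loss yields -inf.
log_probs_naive = logits.sigmoid().log()     # ~ tensor([-inf, -0.6931, 0.0])

# logsigmoid() never saturates; exponentiating it recovers the probabilities
# used by the Dice/Jaccard computation without losing precision on the way.
log_probs_stable = F.logsigmoid(logits)      # ~ tensor([-200.0, -0.6931, 0.0])
probs = log_probs_stable.exp()
```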
  
  
  Other
  
  * Lots of comments added for functions and modules
  * Code cleanup, thanks to DeepSource
  * Type annotations for modules and functions
  * Update of README

0.3.1

Fixes
  
  * Fixed a bug in the IoU metric computation in the `binary_dice_iou_score` function
  * Fixed an incorrect default value in `SoftCrossEntropyLoss` 38
  
  Improvements
  
  * Function `draw_binary_segmentation_predictions` now has an `image_format` parameter (`rgb`|`bgr`|`gray`) to specify the format of the input image, so images are visualized correctly in TensorBoard
  * More type annotations across the codebase
  
  
  New features
  
  * New visualization function `draw_multilabel_segmentation_predictions`

0.3.0

New features
  
  Encoders
  
  * HRNetV2
  * DenseNets
  * EfficientNet
  * `Encoder` class has a `change_input_channels` method to change the number of channels in the input image
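
A hedged usage sketch of the new method; the encoder class name and constructor arguments below are assumptions, and `change_input_channels` is assumed to re-wire the stem in place:

```python
import torch
from pytorch_toolbelt.modules import encoders as E

# Encoder class / constructor arguments are assumptions; any Encoder subclass
# from this release that exposes change_input_channels() is used the same way.
encoder = E.Resnet34Encoder(pretrained=False)

# Re-wire the stem so the encoder accepts a 6-channel input
# (e.g. two stacked RGB frames) instead of the default 3 channels.
encoder.change_input_channels(6)

feature_maps = encoder(torch.randn(1, 6, 256, 256))  # per-stage feature maps
```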
  
  New losses
  
  * `BCELoss` with `ignore_index` support
  * `SoftBCELoss` (label-smoothing loss for the binary case, with `ignore_index` support)
  * `SoftCrossEntropyLoss` (label-smoothing loss for the multiclass case, with `ignore_index` support)
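
For reference, a minimal plain-PyTorch sketch of what a label-smoothing cross-entropy with `ignore_index` does; this illustrates the idea behind `SoftCrossEntropyLoss`, not the library's exact implementation:

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, target, smooth_factor=0.1, ignore_index=-100):
    """Illustrative label-smoothed CE with ignore_index (not the library's code)."""
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)

    # Soft targets: (1 - smooth_factor) on the true class,
    # smooth_factor spread uniformly over the remaining classes.
    with torch.no_grad():
        soft_targets = torch.full_like(log_probs, smooth_factor / (num_classes - 1))
        valid = target != ignore_index
        soft_targets[valid] = soft_targets[valid].scatter(
            1, target[valid].unsqueeze(1), 1.0 - smooth_factor
        )

    loss = -(soft_targets * log_probs).sum(dim=1)
    return loss[valid].mean()   # ignored samples do not contribute to the loss

logits = torch.randn(4, 5, requires_grad=True)  # batch of 4, 5 classes
target = torch.tensor([0, 3, -100, 2])          # one ignored sample
print(smoothed_cross_entropy(logits, target))
```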
  
  Catalyst goodies
  
  * Online pseudolabeling callback
  * Training signal annealing callback
  
  Other
  
  * New activation functions support in `ABN` block: Swish, Mish, HardSigmoid
  * New decoders (Unet, FPN, DeeplabV3, PPM) to simplify creation of segmentation models
  * `CREDITS.md` listing references to code/articles. The existing list is certainly incomplete, so feel free to open PRs
  * Object context block from OCNet
  
  API changes
  
  * Focal loss now supports normalized focal loss and reduced focal loss extensions.
  * Optimize computation of pyramid weight matrix 34
  * Default value `align_corners=False` in `F.interpolate` when doing bilinear upsampling.
  
  Bugfixes
  
  * Fix missing call to batch normalization block in `FPNBottleneckBN`
  * Fix numerical stability for `DiceLoss` and `JaccardLoss` when `log_loss=True`
  * Fix numerical stability when computing normalized focal loss

0.2.1

New features
  
  - Added normalized focal loss
  
  Bugfixes
  
  - Fixed wrong shape of intermediate layers of DenseNet

0.2.0

Catalyst contrib
  
  - Refactored the Dice/IoU metrics into a single `IoUMetricsCallback` with a few cool features: `metric="dice"|"jaccard"` to choose which metric is computed; `mode="binary"|"multiclass"|"multilabel"` to specify the problem type; `classes_of_interest=[1,2,4]` to select which classes the metric is computed for; and `nan_score_on_empty=False` to compute "Dice Accuracy" (counts as 1.0 if both `y_true` and `y_pred` are empty; 0.0 if `y_pred` is not empty). See the usage sketch after this list.
  - Added an L-p regularization callback that applies L1 and L2 regularization to the model, with support for scheduling the regularization strength.
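
A hedged usage sketch of the new callback; the import path is an assumption, while the parameter names follow this release note:

```python
from pytorch_toolbelt.utils.catalyst import IoUMetricsCallback  # import path is an assumption

callbacks = [
    IoUMetricsCallback(
        metric="dice",                  # or "jaccard"
        mode="multiclass",              # "binary" | "multiclass" | "multilabel"
        classes_of_interest=[1, 2, 4],  # compute the metric only for these classes
        nan_score_on_empty=False,       # "Dice Accuracy": empty y_true and y_pred score 1.0
    ),
]
# The list is then passed to the Catalyst runner via its `callbacks` argument.
```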
  
  
  Losses
  
  - Refactored `DiceLoss`/`JaccardLoss` in the same fashion as the metrics.
  
  Models
  
  - Add Densenet encoders
  - Bugfix: Fix missing BN+Relu in `UNetDecoder`
  - Global pooling modules can squeeze the spatial dimensions if `flatten=True`.
  
  Misc
  
  - Add more unit tests
  - Code-style is now managed with Black
  - `to_numpy` now supports `int`, `float` scalar types

0.1.4

* Minor release to update Catalyst contrib modules to latest Catalyst (requires catalyst>=19.8)

0.1.3

1. Added `ignore_index` for focal loss
  2. Added `ignore_index` to some metrics for Catalyst
  3. Added `tif` extension for `find_images_in_dir`

0.1.1

New functionality / breaking changes
  * Added visualization functions to render best/worst batches for binary and semantic segmentation.
  * `JaccardScoreCallback` is now a single callback for computing IoU for binary/multiclass/multilabel segmentation.
  * Added HFF module (Hierarchical feature fusion).
  * Added a `set_trainable` function to enable/disable training and batch norm on a module and its children (see the sketch after this list).
  * RLE encoding/decoding (Hi, Kaggle)
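
A plain-PyTorch sketch of the behaviour described for `set_trainable` (the library's exact signature may differ): toggle `requires_grad` on all parameters and, when freezing, put BatchNorm layers in eval mode so their running statistics stop updating:

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool, freeze_bn: bool = True) -> None:
    # Enable / disable gradients for every parameter of the module and its children.
    for param in module.parameters():
        param.requires_grad = trainable
    if freeze_bn:
        # Switch BatchNorm layers between train/eval so running statistics
        # are only updated while the module is trainable.
        for m in module.modules():
            if isinstance(m, nn.modules.batchnorm._BatchNorm):
                m.train(mode=trainable)
```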
  
  API changes
  * `rgb_image_from_tensor` now accepts a `dtype` parameter for the returned image
  
  Bugfixes
  * Fixed wrong implementation of UpsampleAddConv (there was an extra residual connection)

0.1.0

New stuff:
  1. EfficientNet
  2. Multiscale TTA module (see the sketch after this list)
  3. New activations: Swish, HardSwish, HardSigmoid
  4. AGN module (Activated Group Norm), mimics ABN
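
A conceptual sketch of multiscale TTA for a segmentation model (illustrating the idea only, not the module's exact API): run the model at several input scales, resize the predictions back to the original resolution, and average them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def multiscale_tta(model: nn.Module, image: torch.Tensor, scales=(0.75, 1.0, 1.25)) -> torch.Tensor:
    h, w = image.shape[-2:]
    outputs = []
    for scale in scales:
        scaled = F.interpolate(image, scale_factor=scale, mode="bilinear", align_corners=False)
        pred = model(scaled)
        # Resize the prediction back to the original resolution before averaging.
        outputs.append(F.interpolate(pred, size=(h, w), mode="bilinear", align_corners=False))
    return torch.stack(outputs).mean(dim=0)

model = nn.Conv2d(3, 1, kernel_size=1)  # stand-in for a real segmentation model
averaged = multiscale_tta(model, torch.randn(1, 3, 64, 64))
```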
  
  Changes:
  1. `SpatialGate2d` now accepts `squeeze_channels` to set the number of squeeze channels explicitly.
  
  Misc
  1. Code formatting

0.0.9

* Refactoring of activation functions factory method (for upcoming model builder)
  * Cosmetic changes in logging

0.0.8

* Global pooling, SCSE module and MobileNetV3 encoders are now ONNX- and CoreML-friendly.
  * Refactored FPN module for more flexible `interpolate_add` tuning (can use any module with two inputs)

0.0.7

Added MobileNetV3 encoder (implementation credits to https://github.com/Randl/MobileNetV3-pytorch)

0.0.6

New features
  
  1. Added WiderResNet & WiderResNetA2 encoders (https://github.com/mapillary/inplace_abn)

0.0.5

Changes
  
  - Added 10-Crop TTA (https://github.com/BloodAxe/pytorch-toolbelt/issues/4)
  - Added unit tests for TTA functions
  - Added a `freeze_bn` function to freeze all BN layers in a model (see the sketch after this list)
  - Renamed `unpad_tensor` to `unpad_image_tensor` to mimic `pad_image_tensor`
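
A sketch of what a `freeze_bn`-style helper does (the library's exact signature may differ): put every BatchNorm layer into eval mode and stop updating its affine parameters:

```python
import torch.nn as nn

def freeze_bn(model: nn.Module) -> nn.Module:
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.eval()                    # stop updating running mean / variance
            for param in module.parameters():
                param.requires_grad = False  # freeze affine weight and bias
    return model
```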
  
  Bugfixes
  
  - Fixed bug in `d4_image2mask`
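
A hedged usage sketch of the fixed function (import path assumed): `d4_image2mask` runs the model on the 8 symmetries of the dihedral group D4 (4 rotations, with and without a flip), undoes each transform on the predicted mask, and averages the results:

```python
import torch
import torch.nn as nn
from pytorch_toolbelt.inference import tta  # import path is an assumption for this release

model = nn.Conv2d(3, 1, kernel_size=1)  # stand-in for a real segmentation model
image = torch.randn(1, 3, 64, 64)

with torch.no_grad():
    mask = tta.d4_image2mask(model, image)  # prediction averaged over all D4 transforms
```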

0.0.4

API Changes
  
  1. Refactored TTA interface

0.0.3

Initial release