Torchvision


0.2.1

This version introduces several fixes and improvements over the previous release.

Better printing of Datasets and Transforms

* Add descriptions to Transform objects. Now `T.Compose([T.RandomHorizontalFlip(), T.RandomCrop(224), T.ToTensor()])` prints:

```
Compose(
    RandomHorizontalFlip(p=0.5)
    RandomCrop(size=(224, 224), padding=0)
    ToTensor()
)
```

* Add descriptions to Datasets. Now `torchvision.datasets.MNIST('~')` prints:

```
Dataset MNIST
    Number of datapoints: 60000
    Split: train
    Root Location: /private/home/fmassa
    Transforms (if any): None
    Target Transforms (if any): None
```

New transforms

* Add `RandomApply`, `RandomChoice`, `RandomOrder` transformations (#402); see the sketch after this list
  * `RandomApply`: applies a list of transformations with a given probability
  * `RandomChoice`: randomly picks a single transformation from a list
  * `RandomOrder`: applies the transformations in a random order
* Add random affine transformation (#411)
* Add reflect, symmetric and edge padding to `transforms.Pad` (#460)
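
A minimal sketch of how the three meta-transforms compose; the inner transforms and the probability are illustrative, not taken from the release notes:

```python
import torchvision.transforms as T

# RandomApply applies the whole list with probability p, RandomChoice picks
# one transform at random, and RandomOrder shuffles the application order.
augment = T.Compose([
    T.RandomApply([T.ColorJitter(brightness=0.2)], p=0.5),
    T.RandomChoice([T.RandomCrop(224), T.CenterCrop(224)]),
    T.RandomOrder([T.RandomHorizontalFlip(), T.RandomVerticalFlip()]),
    T.ToTensor(),
])
```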

Performance improvements

* Speed up MNIST preprocessing by a factor of 1000
* Make weight initialization optional to speed up VGG construction; this makes loading pre-trained VGG models much faster
* Accelerate `transforms.adjust_gamma` by using PIL's point function instead of a custom numpy-based implementation

New Datasets

* EMNIST - an extension of MNIST for hand-written letters
* OMNIGLOT - a dataset for one-shot learning, with 1623 different handwritten characters from 50 different alphabets
* Add a `DatasetFolder` class, a generalization of `ImageFolder` (see the sketch after this list)
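
A minimal sketch of the generalization, assuming a hypothetical folder of serialized tensors; the loader and extension list are illustrative:

```python
import torch
from torchvision.datasets import DatasetFolder

# Like ImageFolder, DatasetFolder expects one subdirectory per class, but the
# file type is arbitrary: you supply a loader and the allowed extensions.
def tensor_loader(path):
    return torch.load(path)

dataset = DatasetFolder(root='./features', loader=tensor_loader, extensions=['.pt'])
sample, target = dataset[0]
```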

Miscellaneous improvements

* `FakeData` accepts a `seed` argument, so it is now possible to have multiple different `FakeData` instances
* Use consistent data types in `Dataset` targets: all datasets that return labels now return them as `int`
* Add a probability parameter to `RandomHorizontalFlip` and `RandomVerticalFlip`
* Replace `np.random` with `random` in transforms; this improves reproducibility in multi-threaded environments with default arguments
* Detect tif images in `ImageFolder`
* Add `pad_if_needed` to `RandomCrop`, so that if the crop size is larger than the image, the image is automatically padded (see the sketch after this list)
* Add support in `transforms.ToTensor` for PIL Images with mode '1'
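
A minimal sketch of the `pad_if_needed` behavior; the crop size is illustrative:

```python
from torchvision import transforms

# Images smaller than 256x256 are padded up to the crop size before
# cropping, instead of raising an error.
crop = transforms.RandomCrop(256, pad_if_needed=True)
```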

Bugfixes

* Fix passing a list of tensors to `utils.save_image`
* Single images passed to `make_grid` are now also normalized
* Fix PIL img close warnings
* Add missing weight initializations to DenseNet
* Avoid division by zero in `make_grid` when the image is constant
* Fix `ToTensor` when the PIL Image has mode 'F'
* Fix a bug in `to_tensor` when the input is a numpy array of type np.float32

0.2.0

This version introduced a functional interface to the transforms, allowing for joint random transformation of inputs and targets. We also introduced a few breaking changes to some datasets and transforms (see below for more details).

Transforms
We have introduced a functional interface for the torchvision transforms, available under `torchvision.transforms.functional`. This now makes it possible to do joint random transformations on inputs and targets, which is especially useful in tasks like object detection, segmentation and super resolution. For example, you can now do the following:

```python
from torchvision import transforms
import torchvision.transforms.functional as F
import random

def my_segmentation_transform(input, target):
    # Sample crop parameters once so the same crop is applied to both images
    i, j, h, w = transforms.RandomCrop.get_params(input, (100, 100))
    input = F.crop(input, i, j, h, w)
    target = F.crop(target, i, j, h, w)
    # Apply the same random horizontal flip to input and target
    if random.random() > 0.5:
        input = F.hflip(input)
        target = F.hflip(target)
    input, target = F.to_tensor(input), F.to_tensor(target)
    return input, target
```

The following transforms have also been added:
- [`F.vflip` and `RandomVerticalFlip`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.RandomVerticalFlip)
- [`FiveCrop`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.FiveCrop) and [`TenCrop`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.TenCrop)
- Various color transformations:
  - [`ColorJitter`](http://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.ColorJitter)
  - `F.adjust_brightness`
  - `F.adjust_contrast`
  - `F.adjust_saturation`
  - `F.adjust_hue`
- `LinearTransformation` for applications such as whitening
- `Grayscale` and `RandomGrayscale`
- `Rotate` and `RandomRotation`
- `ToPILImage` now supports `RGBA` images
- `ToPILImage` now accepts a `mode` argument, so you can specify the colorspace of the output image
- `RandomResizedCrop` now accepts `scale` and `ratio` ranges as input parameters (see the sketch after this list)
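
A minimal sketch of the new parameters; the values shown here are the library defaults:

```python
from torchvision import transforms

# Sample a crop covering 8%-100% of the image area, with aspect ratio in
# [3/4, 4/3], then resize the crop to 224x224.
crop = transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.))
```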

Documentation
Documentation is now auto-generated and published at [pytorch.org](http://pytorch.org/docs/master/torchvision/index.html)

Datasets:
- Add the SEMEION dataset of handwritten digits
- Phototour dataset: patches computed via multi-scale Harris corners are now available by setting `name` to `notredame_harris`, `yosemite_harris` or `liberty_harris` (see the sketch after this list)
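
A minimal sketch of selecting the Harris-corner patches; the root path is illustrative:

```python
from torchvision.datasets import PhotoTour

# name='notredame_harris' selects the multi-scale Harris patch set;
# the plain 'notredame' name keeps the original patches.
dataset = PhotoTour(root='./data', name='notredame_harris', download=True)
```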

Bug fixes:
- Pre-trained DenseNet models are now CPU-compatible (#251)

Breaking changes:
This version also introduced some breaking changes:
- The `SVHN` dataset is now consistent with other datasets: the label for the digit 0 is now 0, instead of 10 as it was previously (see #194 for more details)
- The `labels` for the unlabeled `STL10` dataset are now an array filled with `-1`
- The order of the input arguments to the deprecated `Scale` transform has changed from `(width, height)` to `(height, width)`, to be consistent with other transforms

0.1.9

- Ability to switch image backends between PIL and accimage
- Added more tests
- Various bug fixes and doc improvements

Models

- Fix for inception v3 input transform bug (https://github.com/pytorch/vision/pull/144)
- Added pretrained VGG models with batch norm

Datasets

- Fix indexing bug in LSUN dataset (https://github.com/pytorch/vision/pull/177)
- Enable `~` to be used in dataset paths
- `ImageFolder` now returns the same (sorted) file order on different machines (https://github.com/pytorch/vision/pull/193)

Transforms

- `transforms.Scale` now accepts either a tuple as the new size or a single integer (see the sketch below)
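
A minimal sketch of the two accepted forms; note that the tuple argument order for the deprecated `Scale` later changed in 0.2.0, as listed above:

```python
from torchvision import transforms

# A single int rescales the smaller edge to that size, keeping the aspect
# ratio; a tuple requests an exact output size.
resize_edge = transforms.Scale(256)
resize_exact = transforms.Scale((240, 320))
```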

Utils

- A `pad_value` can now be passed to `make_grid` and `save_image` (see the sketch below)
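
A minimal sketch, assuming a random batch of images:

```python
import torch
from torchvision import utils

# pad_value fills the spacing between grid tiles; 1.0 gives white separators.
batch = torch.rand(8, 3, 64, 64)
grid = utils.make_grid(batch, padding=2, pad_value=1.0)
utils.save_image(batch, 'grid.png', padding=2, pad_value=1.0)
```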

0.1.8

New Features
Models
- SqueezeNet 1.0 and 1.1 models added, along with pre-trained weights
- Add pre-trained weights for VGG models
- Fix location of dropout in VGG
- `torchvision.models` now exposes `num_classes` as a constructor argument (see the sketch after this list)
- Add InceptionV3 model and pre-trained weights
- Add DenseNet models and pre-trained weights
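
A minimal sketch of the constructor argument and the pre-trained weights; the model choices are illustrative:

```python
from torchvision import models

# num_classes sizes the final classifier layer (no pre-trained weights here,
# since the published weights assume the default 1000 classes).
model = models.resnet18(num_classes=10)

# Pre-trained weights are downloaded on first use.
vgg = models.vgg16(pretrained=True)
```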

Datasets

- Add STL10 dataset
- Add SVHN dataset
- Add PhotoTour dataset

Transforms and Utilities
- `transforms.Pad` now accepts fill colors given either as number tuples or as named colors like `"white"` (see the sketch after this list)
- Add normalization options to `make_grid` and `save_image`
- `ToTensor` now supports more input types
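
A minimal sketch of the two fill-color forms; the padding size is illustrative:

```python
from torchvision import transforms

# Fill the 4-pixel border with an RGB tuple or with a named color.
pad_rgb = transforms.Pad(4, fill=(255, 0, 0))
pad_named = transforms.Pad(4, fill='white')
```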

Performance Improvements

Bug Fixes
- `ToPILImage` now supports a single image
- Python 3 compatibility bug fixes
- `ToTensor` now copes with all PIL Image types, not just RGB images
- `ImageFolder` now only scans subdirectories
- Files like `.DS_Store` no longer block dataset loading
- Check for a non-zero number of images in `ImageFolder`
- Subdirectories of classes are scanned recursively for images
- The LSUN test set now loads correctly

0.1.7

A small release, just needed a version bump because of PyPI.

0.1.6

New Features
- Add `torchvision.models`: definitions and pre-trained models for common vision models
  - ResNet, AlexNet and VGG models added, with downloadable pre-trained weights
- Add padding support to `RandomCrop`; also add `transforms.Pad` (see the sketch after this list)
- Add the MNIST dataset
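
A minimal sketch of the new padding support; the sizes are illustrative:

```python
from torchvision import transforms

# Equivalent pipelines: RandomCrop's new padding argument, or an explicit
# Pad transform followed by a plain RandomCrop.
crop_a = transforms.RandomCrop(32, padding=4)
crop_b = transforms.Compose([transforms.Pad(4), transforms.RandomCrop(32)])
```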

Performance Fixes
- Fix performance of the LSUN dataset


Bug Fixes
- Some Python 3 fixes
- Bug fixes in `save_image`; add single-channel support
