torchvision


0.1.8

New Features
Models
- SqueezeNet 1.0 and 1.1 models added, along with pre-trained weights
- Add pre-trained weights for VGG models
- Fix location of dropout in VGG
- Models in `torchvision.models` now expose `num_classes` as a constructor argument
- Add InceptionV3 model and pre-trained weights
- Add DenseNet models and pre-trained weights

Datasets

- Add STL10 dataset
- Add SVHN dataset
- Add PhotoTour dataset

Transforms and Utilities
- `transforms.Pad` now accepts fill colors given either as number tuples or as named colors like `"white"`
- Add normalization options to `make_grid` and `save_image`
- `ToTensor` now supports more input types

Bug Fixes
- `ToPILImage` now supports a single image
- Python 3 compatibility fixes
- `ToTensor` now copes with all PIL Image modes, not just RGB images
- `ImageFolder` now scans only subdirectories
- Stray files like `.DS_Store` no longer block dataset loading
- `ImageFolder` now checks that it found a non-zero number of images
- Class subdirectories are scanned recursively for images
- The LSUN test set now loads correctly

0.1.7

A small release; it just needed a version bump for PyPI.

0.1.6

New Features
- Add `torchvision.models`: Definitions and pre-trained models for common vision models
- ResNet, AlexNet, and VGG models added, with downloadable pre-trained weights
- Add a `padding` option to `RandomCrop`; also add `transforms.Pad`
- Add MNIST dataset

Performance Fixes
- Improve performance of the LSUN dataset


Bug Fixes
- Python 3 compatibility fixes
- Bug fixes in `save_image`; add single-channel support

0.1.5

Introduced Datasets and Transforms.

Added common datasets

- COCO (Captioning and Detection)
- LSUN Classification
- ImageFolder
- Imagenet-12
- CIFAR10 and CIFAR100

Also added utilities for saving images from Tensors.
