Lightly

Latest version: v1.5.4


1.0.8

```python
>>> # batch size is 128
>>> (batch0, batch1), labels, filenames = next(iter(dataloader))
>>> batch0.shape
torch.Size([128, 3, 32, 32])
>>> batch1.shape
torch.Size([128, 3, 32, 32])
>>> # number of features is 64
>>> y0, y1 = simclr(batch0, batch1)
>>> y0.shape
torch.Size([128, 64])
>>> y1.shape
torch.Size([128, 64])
>>> loss = ntx_ent_loss(y0, y1)
```
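The loss computed above can be sketched in a few lines of NumPy. This is a simplified re-implementation for illustration only, assuming `ntx_ent_loss` follows the standard SimCLR formulation (temperature-scaled cross entropy over cosine similarities of the two views); the package's actual implementation may differ in details.

```python
import numpy as np

def ntx_ent_loss_np(z0, z1, temperature=0.5):
    """Illustrative NT-Xent loss for two views (not lightly's implementation).

    z0, z1: arrays of shape (batch_size, dim) holding the projections of the
    two augmented views; row i of z0 and row i of z1 form a positive pair.
    """
    n = z0.shape[0]
    z = np.concatenate([z0, z1], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit vectors -> cosine sim
    sim = z @ z.T / temperature                        # (2n, 2n) similarity logits
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    # the positive for row i is row i + n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # numerically stable log-sum-exp over each row
    m = sim.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(lse - sim[np.arange(2 * n), pos]))
```

When the two views agree, the positive logit dominates each row and the loss is lower than for unrelated embeddings.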

Documentation Updates
A tutorial on how to use the SimSiam model has been added, along with some minor changes and improvements.

Minor Changes
Private functions are hidden from autocompletion.

Models
- [SimSiam: Exploring Simple Siamese Representation Learning](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.0.7

Video Reader Backend

Torchvision Video Reader Compatibility
If available, the video loader can use the torchvision video reader backend to load frames more quickly.
The new sequential video loader also allows much faster iteration through frames when they are processed in order.
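The speedup comes from the access pattern: reading frames in order lets the decoder keep going from its current position, while out-of-order reads force a seek each time. A toy sketch of that pattern (a stand-in stub that counts seeks, not lightly's actual reader):

```python
class VideoStub:
    """Stand-in for a video decoder: reading any frame other than the next
    one requires a (slow) seek; reading in order does not."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.position = 0   # index of the next frame the decoder will emit
        self.seeks = 0      # how many times we had to jump

    def read_frame(self, index):
        if index != self.position:
            self.seeks += 1          # out-of-order read: seek first
            self.position = index
        self.position += 1
        return index                 # a real reader would return pixel data


def read_sequential(video):
    """Iterate frames in order -- the fast path a sequential loader uses."""
    return [video.read_frame(i) for i in range(video.num_frames)]
```

Reading every frame in order triggers zero seeks; a shuffled read order seeks on almost every frame.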

Continuous Testing
Integration of continuous unit testing with a badge in the README.

New SimCLR Tutorial
SimCLR in only a few lines of code: a tutorial on a clothing dataset.

Models
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.0.6

Custom Backbones Made Easy

Decoupling Self-Supervised Models from ResNet
The implementations of SimCLR and MoCo have been changed such that they can now be constructed from an arbitrary backbone network.
Furthermore, the backbone of the self-supervised models is now called `backbone` instead of `features`.
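The idea can be sketched as follows: a hypothetical wrapper (not lightly's exact class) that takes any backbone producing flat features and attaches a SimCLR-style projection head.

```python
import torch
import torch.nn as nn

class SimCLRFromBackbone(nn.Module):
    """Hypothetical sketch: build a SimCLR-style model from any backbone
    that maps an input batch to flat features of size num_ftrs."""

    def __init__(self, backbone, num_ftrs, out_dim):
        super().__init__()
        self.backbone = backbone            # named `backbone`, not `features`
        self.projection_head = nn.Sequential(
            nn.Linear(num_ftrs, num_ftrs),
            nn.ReLU(),
            nn.Linear(num_ftrs, out_dim),
        )

    def forward(self, x0, x1):
        # embed both augmented views with the shared backbone and head
        y0 = self.projection_head(self.backbone(x0))
        y1 = self.projection_head(self.backbone(x1))
        return y0, y1
```

Any module works as the backbone: a torchvision ResNet with its classification layer removed, or a tiny custom net for testing.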

Big Documentation Update
The documentation has received a lot of love and improvements to make working with `lightly` easier.
There is also a new tutorial on [how to train MoCo on Cifar-10](https://docs.lightly.ai/tutorials/package/tutorial_moco_memory_bank.html).

Minor Changes
The `LightlyDataset` can now be passed a list of indices marking relevant samples. Non-relevant samples will be ignored during further processing of the data.
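A minimal sketch of the idea, using a generic wrapper rather than `LightlyDataset` itself (whose exact signature may differ): only the samples at the listed indices are exposed, everything else is skipped.

```python
class IndexFilteredDataset:
    """Expose only the samples at the given indices of a base dataset."""

    def __init__(self, dataset, indices):
        self.dataset = dataset
        self.indices = list(indices)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, position):
        # map the filtered position back to the underlying dataset
        return self.dataset[self.indices[position]]
```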

Models
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.0.5

Customizable Checkpoint Callbacks, Batch Shuffling and More

Fixed download speed for image datasets.
`lightly-magic` can now be used with `trainer.max_epochs=0`.
Fixed the pytorch-lightning warning: "Passing a ModelCheckpoint instance to Trainer(checkpoint_callbacks=...) is deprecated since v1.1 and will no longer be supported in v1.3."

Customizable Checkpoint Callback
Checkpoint callbacks are now customizable (even from the command-line):
```bash
# save the 5 best models
lightly-train input_dir='data/' checkpoint_callback.save_top_k=5

# don't save the model of the last epoch
lightly-train input_dir='data/' checkpoint_callback.save_last=False
```

Batch Shuffling
Added [batch shuffling to MoCo](https://github.com/facebookresearch/moco/blob/master/moco/builder.py) and `SplitBatchNorm` to simulate multi-GPU behaviour.
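Batch shuffling counters the information leakage that batch-norm statistics would otherwise cause for the key encoder on a single device. The core trick is just a permutation and its inverse; a NumPy sketch of the idea (the MoCo reference implementation does this across GPUs):

```python
import numpy as np

def batch_shuffle(x, rng):
    """Shuffle a batch along dim 0; return the shuffled batch and the permutation."""
    idx = rng.permutation(len(x))
    return x[idx], idx

def batch_unshuffle(x, idx):
    """Undo batch_shuffle by applying the inverse permutation."""
    return x[np.argsort(idx)]
```

Shuffling before the key encoder and unshuffling afterwards keeps the pairing of queries and keys intact.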

Image Resizing
Images can be resized before uploading them to the web-app:
```bash
# no resizing (default)
lightly-upload input_dir='data/' dataset_id='XYZ' token='123' resize=-1

# resize such that the shortest edge of the image is 128
lightly-upload input_dir='data/' dataset_id='XYZ' token='123' resize=128

# resize images to (128, 128)
lightly-upload input_dir='data/' dataset_id='XYZ' token='123' resize=[128,128]
```
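The three `resize` forms above can be summarized as a small helper. This is an assumption about the exact rule, inferred from the comments, not the CLI's actual code:

```python
def resize_target(height, width, resize):
    """Map a resize option to a target (height, width).

    -1 keeps the original size, a single int scales the shortest edge to
    that length (preserving aspect ratio), and an [h, w] pair is used as-is.
    """
    if resize == -1:
        return height, width
    if isinstance(resize, int):
        if height <= width:                      # height is the shortest edge
            return resize, round(width * resize / height)
        return round(height * resize / width), resize
    return tuple(resize)
```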


Models
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.0.4

Video File Support and Minor Changes

Refactoring of `lightly.api.upload.py` and `lightly.api.utils.py`.
Moved the checkpoint loading from the CLI to `lightly.models.simclr` and `lightly.models.moco`, respectively.
Minor bug fixes.

Video File Support
Lightly can now directly work with video files! No need to extract the frames first. Check the [docs](https://docs.lightly.ai/tutorials/structure_your_input.html) to see how!

Models
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.0.3

Bug Fixes and On-Premise Documentation
Fixed the efficiency display during embedding.
Added `trainer.precision` key to config. Train at half-precision with:
```bash
lightly-train input_dir=my/input/dir trainer.precision=16
```


New On-Premise Documentation
The documentation received a whole new part on how to use the Lightly on-premise docker solution.

Models
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

