Lightly


1.1.11

Nearest Neighbour Contrastive Learning of Representations (NNCLR)

New NNCLR model
NNCLR [0] is essentially SimCLR, but it replaces each sample's representation with that of its nearest neighbour from a support set as an additional "augmentation" step.
As part of this release, a nearest neighbour memory bank module was implemented, which can also be used with other models.

[0] [With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations](https://arxiv.org/abs/2104.14548v1)

```python
import torch.nn as nn
import torchvision

# import paths assumed for lightly v1.1.x; they may differ in other versions
from lightly.models import NNCLR, SimSiam, BYOL
from lightly.loss import NTXentLoss, SymNegCosineSimilarityLoss
from lightly.models.modules import NNMemoryBankModule

# backbone: ResNet-18 without the classification head
resnet = torchvision.models.resnet18()
backbone = nn.Sequential(
    *list(resnet.children())[:-1],
    nn.AdaptiveAvgPool2d(1),
)

# NNCLR
model = NNCLR(backbone)
criterion = NTXentLoss()

# prefer SimSiam with nearest neighbour?
model = SimSiam(backbone)
criterion = SymNegCosineSimilarityLoss()

# prefer BYOL with nearest neighbour?
model = BYOL(backbone)
criterion = SymNegCosineSimilarityLoss()

nn_replacer = NNMemoryBankModule(size=2 ** 16)

# forward pass (x0, x1 are two augmented views of the same image batch)
(z0, p0), (z1, p1) = model(x0, x1)
z0 = nn_replacer(z0.detach(), update=False)
z1 = nn_replacer(z1.detach(), update=True)
loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
```


Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.10

Documentation Updates and Miscellaneous

Documentation Updates
- Added two new tutorials to the docs.

Miscellaneous
- The warning shown when a newer lightly version is available now also displays the currently installed version.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.9

Additional Support for Videos, Minor Bug Fixes, and Documentation Updates

Additional Video Formats
The `LightlyDataset` now works with `.mpg`, `.hevc`, `.m4v`, `.webm`, and `.mpeg` videos.
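
For illustration, a minimal sketch of loading a directory of videos (the `input_dir` argument is the standard `LightlyDataset` entry point; the exact frame-sampling behaviour depends on your installed video backend, and the path shown is a placeholder):

```python
from lightly.data import LightlyDataset

# point LightlyDataset at a folder containing video files;
# individual frames are exposed as dataset items
dataset = LightlyDataset(input_dir="path/to/videos")
print(len(dataset))  # number of frames across all videos
```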

Bug Fixes
- Replaced the `squeeze` operation with `flatten` in the model forward passes (see the sketch after this list). Thanks guarin for noticing and for the fix!
- Made `lightly` compatible with `pytorch-lightning>=1.3.0`.
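
For context, a minimal sketch (plain PyTorch, not lightly's actual code) of why `flatten` is safer than `squeeze` after global pooling: `squeeze` removes every dimension of size one, including the batch dimension when the batch contains a single sample.

```python
import torch

pooled = torch.randn(1, 512, 1, 1)  # global average pooling output, batch size 1

bad = pooled.squeeze()              # shape: (512,) -- batch dimension is gone
good = pooled.flatten(start_dim=1)  # shape: (1, 512) -- batch dimension preserved

print(bad.shape, good.shape)
```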

Documentation Updates

- The `lightly-magic` command is finally featured in the docs. Thanks pranavsinghps1!
- Big update on the docker docs.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.8

BYOL Model, Refactoring, and New Tutorial for Active Learning

New Model: BYOL

- This release adds a new model for self-supervised learning: BYOL (see https://arxiv.org/abs/2006.07733)
- Thanks pranavsinghps1 for your contribution!

Improvements

- Refactored NTXent Loss. The new code is shorter and easier to understand.
- Added a scorer for semantic segmentation, enabling active learning on image segmentation tasks
- Added color highlighting to the CLI output
- The CLI now returns the `dataset_id` when creating a new dataset

New Active Learning Tutorial using Detectron2

- This tutorial shows the full power of the lightly self-supervised embedding and active learning scorers
- Check it out here: https://docs.lightly.ai/tutorials/platform/tutorial_active_learning_detectron2.html

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.7

Active Learning Refactoring and Minor Improvements

Instantiate shuffle tensor directly on device
This change makes our momentum encoders more efficient by directly instantiating temporary tensors on device instead of moving them there after instantiation. Thanks a lot to guarin for pointing out the problem and swiftly fixing it!
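
For illustration, a generic PyTorch sketch of the pattern (not lightly's actual code): creating the shuffle indices directly on the target device avoids an extra host-to-device copy.

```python
import torch

x = torch.randn(256, 128, device="cuda" if torch.cuda.is_available() else "cpu")

# slower: the permutation is created on the CPU and then moved to the device
idx = torch.randperm(x.shape[0]).to(x.device)

# faster: the permutation is instantiated directly on the device
idx = torch.randperm(x.shape[0], device=x.device)

shuffled = x[idx]
```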

Active Learning Refactoring
Active learning scores are now always uploaded to a query tag instead of the preselected tag. This makes the framework more flexible and easier to use, and it allows users to run several samplings with the same set of scores at the cost of a little computational overhead.
Additionally, the active learning scores were renamed to match the current literature. We now support uncertainty sampling with the least confidence, margin, and entropy variants, as described in http://burrsettles.com/pub/settles.activelearning.pdf (chapter 3.1, pages 12 ff.).
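
For reference, the three uncertainty variants from Settles can be computed from a model's predicted class probabilities as follows (a generic sketch using the textbook formulas, not lightly's scorer API; higher score = more uncertain):

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray) -> dict:
    """probs: array of shape (n_samples, n_classes) with softmax probabilities."""
    sorted_probs = np.sort(probs, axis=1)[:, ::-1]  # descending per sample
    return {
        # least confidence: 1 - probability of the most likely class
        "least_confidence": 1.0 - sorted_probs[:, 0],
        # margin: small gap between the two most likely classes = high uncertainty
        "margin": 1.0 - (sorted_probs[:, 0] - sorted_probs[:, 1]),
        # entropy of the full predicted distribution
        "entropy": -np.sum(probs * np.log(probs + 1e-12), axis=1),
    }

scores = uncertainty_scores(np.array([[0.7, 0.2, 0.1], [0.34, 0.33, 0.33]]))
```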

Minor Bug Fixes and Improvements
Better handling of edge cases when doing active learning for object detection.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.6

More Powerful CLI Commands, Stability Improvements and Updated Documentation
Create a new dataset directly when running `lightly-upload` and `lightly-magic`
Just replace the argument `dataset_id="your_dataset_id"` with `new_dataset_name="your_dataset_name"`. To learn more, have a look at the docs.

Get only the newly added samples from a tag
`lightly-download` now has the flag `exclude_parent_tag`. If this flag is set, the samples in the parent tag are excluded from the download. This is very practical for active learning, when you only want the filenames that were newly added to the tag.

`ActiveLearningAgent` has a new attribute `added_set`
If you prefer getting the newly added samples from the active learning agent, simply access its new attribute `added_set`, as sketched below.
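
A minimal sketch of the `added_set` workflow (the import paths, token, and constructor arguments shown here are assumptions based on lightly's active learning API of this era, not verbatim from this release):

```python
from lightly.api import ApiWorkflowClient
from lightly.active_learning.agents import ActiveLearningAgent

# hypothetical token and dataset_id, shown for illustration only
client = ApiWorkflowClient(token="MY_TOKEN", dataset_id="MY_DATASET_ID")
agent = ActiveLearningAgent(client)

# ... run a sampling / query step with the agent ...

# filenames that the last query added on top of the previous tag
new_filenames = agent.added_set
```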

Minor Updates and Fixes
Updated documentation and docstrings to make working with lightly simpler.
Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
