Lightly

Latest version: v1.5.4


1.1.17

Active Learning Score Upload

Active Learning Score Upload
The lightly `ActiveLearningAgent` now supports an easy way to upload active learning scores to the [Lightly Web-app](https://app.lightly.ai).

Register Datasets before Upload
The refactored dataset upload now registers a dataset in the web-app before uploading the samples. This makes the upload more efficient and stable. Additionally, the progress of the upload can now be observed in the [Lightly Web-app](https://app.lightly.ai).

Documentation Updates
The [lightly on-premise documentation](https://docs.lightly.ai/docker/overview.html) was updated.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.16

Improved Tutorial and Bug Fix in Masked Select

Improved Tutorial
The "Sunflowers" Tutorial has been overhauled and provides a great starting point for anyone trying to clean up their data.

Bug Fix in Masked Select
A major bug fix resolves confusion between the little-endian and big-endian bit order of the bit masks used for active learning.
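To illustrate the distinction (a generic sketch, not the lightly implementation): the same selection mask maps to different integers depending on the bit order used to interpret it.

```python
def mask_to_int_little(bits):
    # Little-endian bit order: bit i of the integer corresponds to sample i.
    return sum(b << i for i, b in enumerate(bits))

def mask_to_int_big(bits):
    # Big-endian bit order: sample 0 maps to the most significant bit.
    n = len(bits)
    return sum(b << (n - 1 - i) for i, b in enumerate(bits))

bits = [1, 0, 1, 0, 0, 0, 0, 0]  # samples 0 and 2 selected
print(mask_to_int_little(bits))  # 5
print(mask_to_int_big(bits))     # 160
```

If the sender packs the mask one way and the receiver unpacks it the other way, entirely different samples appear selected, which is the kind of mismatch this fix addresses.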

Updated Requirements
`lightly` now requires the `lightly-utils` package via the version range `0.0.*` instead of a pinned version. This lets the maintainers ship bug fixes and updates more quickly.
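In pip terms, the relaxed requirement corresponds to a wildcard version specifier; the package name comes from the release note, and the exact specifier shown is an assumption:

```shell
# Accept any patch release in the 0.0 series instead of one pinned version:
pip install "lightly-utils==0.0.*"
```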

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.15

Resume Upload and Minor Updates

Resume Upload
The upload of a dataset can now be resumed if interrupted, as the `lightly-upload` and `lightly-magic` commands will skip files which are already on the platform.
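A resumed upload is therefore just a re-run of the same command; files already on the platform are skipped. The invocation below is a sketch, and the token and dataset values are placeholders:

```shell
# Re-running the upload after an interruption skips files that were
# already transferred (token and dataset_id are placeholders):
lightly-upload input_dir=./my_dataset token=YOUR_TOKEN dataset_id=YOUR_DATASET_ID
```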

Minor Updates
Filenames of images which are uploaded to the platform can now be up to 255 characters long.
Lightly can now be [cited](https://github.com/lightly-ai/lightly#bibtex) :)

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.14

Lightly-Crop Command, Much Faster Upload, Faster Ntxent Loss

The `lightly-crop` CLI command crops objects out of the input images based on labels and copies them into an output folder. This is very useful for applying SSL at the object level instead of the image level. For more information, see the documentation at https://docs.lightly.ai/getting_started/command_line_tool.html#crop-images-using-labels-or-predictions
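A typical invocation might look as follows; the parameter names are illustrative assumptions, so check them against the documentation linked above:

```shell
# Crop objects out of the input images using the provided labels
# (parameter names are illustrative; see the linked docs for the exact CLI):
lightly-crop input_dir=./images label_dir=./labels output_dir=./cropped
```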

We made the upload to the Lightly platform via `lightly-upload` or `lightly-magic` much faster. It should be at least twice as fast for smaller images, and even faster for large or compressed images such as large JPEGs.

The NT-Xent loss now runs faster thanks to optimized transfers between CPU and GPU.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.13

More CLI parameters, Bugfixes, Documentation

This release adds the new CLI parameter `trainer.weights_summary`, which sets the corresponding parameter of the PyTorch Lightning trainer and controls how much information about your embedding model is printed.
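As a sketch, the accepted values follow PyTorch Lightning's `weights_summary` options (e.g. `top` or `full`); the command form below is an assumption:

```shell
# Print a full layer-by-layer summary of the embedding model during training:
lightly-train input_dir=./my_dataset trainer.weights_summary=full
```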

It also includes some bugfixes and documentation improvements.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.12

New ImageNette Benchmarks and Faster Dataset Indexing

This release contains smaller fixes on the data processing side:

- Dataset indexing is now up to twice as fast when working with larger datasets
- By default, we no longer use `0` workers. The new default of `-1` automatically detects the number of available CPU cores and uses all of them. This can speed up both loading data and uploading data to the Lightly Platform.
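The `-1` default can be resolved as in the following sketch (a hypothetical helper, not the lightly implementation):

```python
import os

def resolve_num_workers(num_workers: int) -> int:
    """Map the sentinel value -1 to the number of available CPU cores."""
    if num_workers == -1:
        # os.cpu_count() can return None; fall back to a single worker.
        return os.cpu_count() or 1
    return num_workers

print(resolve_num_workers(4))   # 4
print(resolve_num_workers(0))   # 0 (load data in the main process)
```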

New ImageNette Benchmarks
We added new benchmarks for the ImageNette dataset.

| Model | Epochs | Batch Size | Test Accuracy |
|-------------|--------|------------|---------------|
| MoCo | 800 | 256 | 0.827 |
| SimCLR | 800 | 256 | 0.847 |
| SimSiam | 800 | 256 | 0.827 |
| BarlowTwins | 800 | 256 | 0.801 |
| BYOL | 800 | 256 | 0.851 |


Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

