Lightly

1.1.5

Hypersphere Loss, Stability Improvements and Updated Documentation

Hypersphere Loss (EelcoHoogendoorn)
Implemented the loss function [described here](https://arxiv.org/abs/2005.10242), which achieves results competitive with more widely cited losses (symmetric negative cosine similarity and contrastive loss) while providing better interpretability.
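For intuition, the paper decomposes the objective into an alignment term (matched views should be close) and a uniformity term (features should spread evenly over the hypersphere). A minimal sketch following the paper's reference implementation, assuming L2-normalized embeddings:

```python
import torch

def align_loss(x: torch.Tensor, y: torch.Tensor, alpha: int = 2) -> torch.Tensor:
    # alignment: mean distance between positive pairs
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x: torch.Tensor, t: int = 2) -> torch.Tensor:
    # uniformity: log of the mean pairwise Gaussian potential
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```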

You can use the loss just like all other loss functions supported by lightly:

```python
from lightly.loss import HypersphereLoss

# initialize the loss function
loss_fn = HypersphereLoss()

# generate two random augmentations of the images
# (`transforms`, `images`, and `model` are placeholders from your pipeline)
t0 = transforms(images)
t1 = transforms(images)

# feed both views through the model (e.g. SimSiam)
out0, out1 = model(t0, t1)

# calculate the loss
loss = loss_fn(out0, out1)
```


Thank you, EelcoHoogendoorn, for your contribution!

Minor Updates and Fixes
- Updated documentation and docstrings to make working with lightly simpler.
- Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.4

Consistency Regularization, CLI update, and API client update

Consistency Regularization
This release contains an implementation of the [CO2 (consistency contrast) regularization](https://arxiv.org/abs/2010.02217) which can be used together with our contrastive loss function. We observed consistent (although marginal) improvements when applying the regularizer to our models!
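To illustrate the idea, the consistency term penalizes disagreement between the similarity distributions of the two augmented views. Below is a from-scratch sketch of a simplified, in-batch-negatives version of this term, based on the paper rather than lightly's exact API:

```python
import torch
import torch.nn.functional as F

def co2_consistency(out0: torch.Tensor, out1: torch.Tensor, t: float = 0.1) -> torch.Tensor:
    """Simplified consistency-contrast term: symmetric KL divergence between
    the two views' similarity distributions over the in-batch negatives."""
    out0 = F.normalize(out0, dim=1)
    out1 = F.normalize(out1, dim=1)
    logits0 = out0 @ out1.t() / t  # view-0 similarities to view-1 samples
    logits1 = out1 @ out0.t() / t  # view-1 similarities to view-0 samples
    log_p0 = F.log_softmax(logits0, dim=1)
    log_p1 = F.log_softmax(logits1, dim=1)
    # symmetric KL between the two "pseudo-label" distributions
    kl01 = F.kl_div(log_p0, log_p1.exp(), reduction="batchmean")
    kl10 = F.kl_div(log_p1, log_p0.exp(), reduction="batchmean")
    return 0.5 * (kl01 + kl10)
```

In practice the term is added to the contrastive loss with a small weight.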

lightly-version
A new CLI command was added so users can easily check the installed version from the command line. This is especially useful when working with multiple environments where it is unclear which version of lightly is in use. The command is:

```bash
lightly-version
```

1.1.3

New Augmentation (Solarization) and Updates to README and Docs

Solarization
Solarization is an augmentation that inverts all pixel values above a given threshold. It is used in many self-supervised learning papers, for example [BYOL](https://arxiv.org/abs/2006.07733) and [Barlow Twins](https://arxiv.org/abs/2103.03230).
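A minimal sketch of the operation itself (PIL's `ImageOps.solarize` implements the same thing):

```python
import numpy as np
from PIL import Image

def solarize(image: Image.Image, threshold: int = 128) -> Image.Image:
    # invert every pixel value that is at or above the threshold
    arr = np.asarray(image)
    arr = np.where(arr >= threshold, 255 - arr, arr).astype(np.uint8)
    return Image.fromarray(arr)
```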

Updates to README and Docs (multi GPU training)
The README received a code example showing how to use lightly. The documentation was polished and now includes a section on using lightly with multiple GPUs.

Experimental: Active Learning Scorers for Object Detection
Scorers for active learning with object detection were added. These scorers will not work with the API yet and are therefore also not yet documented.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.2

Barlow Twins, a New Benchmarking Module and Updated Documentation

Barlow Twins (AdrianArnaiz)
An implementation of the Barlow Twins architecture and loss for self-supervised learning was added. The approach measures the cross-correlation matrix between the outputs of two identical networks and pushes it as close to the identity matrix as possible.
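A minimal sketch of the objective, assuming two batches of projections `z0` and `z1` (illustrative names, not lightly's exact API):

```python
import torch

def barlow_twins_loss(z0: torch.Tensor, z1: torch.Tensor, lambda_: float = 5e-3) -> torch.Tensor:
    n = z0.shape[0]
    # standardize each embedding dimension across the batch
    z0 = (z0 - z0.mean(0)) / z0.std(0)
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    # empirical cross-correlation matrix between the two views
    c = (z0.t() @ z1) / n
    # invariance term: diagonal entries should be 1
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # redundancy-reduction term: off-diagonal entries should be 0
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_ * off_diag
```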

Thank you, AdrianArnaiz, for your contribution!

Benchmarking Module
A [benchmarking module](https://docs.lightly.ai/lightly.utils.html#module-lightly.utils.benchmarking) was added for simpler evaluation of models using a kNN callback.
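For intuition, kNN evaluation classifies each validation embedding by majority vote among its nearest training embeddings, with no fine-tuning. A generic sketch (not lightly's exact API):

```python
import torch
import torch.nn.functional as F

def knn_accuracy(train_z, train_y, test_z, test_y, k: int = 5) -> float:
    # cosine similarity between validation and training embeddings
    train_z = F.normalize(train_z, dim=1)
    test_z = F.normalize(test_z, dim=1)
    sim = test_z @ train_z.t()
    # majority vote among the k nearest training samples
    idx = sim.topk(k, dim=1).indices
    preds = torch.mode(train_y[idx], dim=1).values
    return (preds == test_y).float().mean().item()
```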

API Updates: Lightly Platform
You can now easily download your datasets from the [Lightly Platform](https://app.lightly.ai) using the CLI:
```bash
lightly-download token=123 dataset_id=xyz output_dir=store/dataset/here
lightly-download token=123 dataset_id=xyz tag_name=my-tag output_dir=store/tag/here
```

Minor Updates and Fixes
- Updated documentation and docstrings to make working with lightly simpler.
- `transforms` can now be passed directly to the `LightlyDataset` (see the short sketch below). [Learn more here](https://docs.lightly.ai/lightly.data.html#module-lightly.data.dataset).
- Minor bug fixes and improvements.
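A minimal sketch of passing a transform at construction time (the `input_dir` path is a placeholder):

```python
import torchvision.transforms as T
from lightly.data import LightlyDataset

# build an augmentation pipeline and hand it directly to the dataset
transform = T.Compose([
    T.RandomResizedCrop(224),
    T.ToTensor(),
])
dataset = LightlyDataset(input_dir="path/to/images", transform=transform)
```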

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.1

Fix Imports
Fixes a bug introduced in the last release.

1.1.0

Lightly gets support for Active-Learning
We're excited to offer our [new active-learning functionality](https://docs.lightly.ai/getting_started/active_learning.html)! Use the strong representations learned in a self-supervised fashion together with model predictions to further improve the data selection process.

This release introduces breaking changes with respect to the API calls.

Active-Learning
The self-supervised representations together with the model predictions provide a great basis for deciding which samples should be annotated and which ones are redundant.

This release brings a completely new interface with which you can add active-learning to your ML project with just a few lines of code (see the sketch after the list):

- ApiWorkflowClient: `lightly.api.api_workflow_client.ApiWorkflowClient`
The ApiWorkflowClient is used to connect to our API. The API handles the selection of the images based on embeddings and active-learning scores. To initialize the ApiWorkflowClient, you will need the datasetId and the token from the Lightly Platform.
- ActiveLearningAgent: `lightly.active_learning.agents.agent.ActiveLearningAgent`
The ActiveLearningAgent builds the client interface of our active-learning framework. It indicates which images have been preselected and which ones can still be sampled, and it can be queried for a new batch of images. To initialize an ActiveLearningAgent, you need an ApiWorkflowClient.
- SamplerConfig: `lightly.active_learning.config.sampler_config.SamplerConfig`
The SamplerConfig allows the configuration of a sampling request. In particular, you can set the number of samples, the name of the resulting selection, and the SamplingMethod. Currently, you can set the SamplingMethod to one of the following:
- Random: Selects samples uniformly at random.
- Coreset: Selects samples that are diverse.
- Coral: Combines Coreset with scores to do active-learning.
- Scorer: `lightly.active_learning.scorers.scorer.Scorer`
The Scorer takes as input the predictions of a pre-trained model on the set of unlabeled images. It evaluates different scores based on how certain the model is about the images and passes them to the API so the sampler can use them with Coral.
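A minimal end-to-end sketch tying these pieces together, using the class paths listed above (the token, dataset id, and keyword names are illustrative; consult the documentation for exact signatures):

```python
from lightly.api.api_workflow_client import ApiWorkflowClient
from lightly.active_learning.agents.agent import ActiveLearningAgent
from lightly.active_learning.config.sampler_config import SamplerConfig

# connect to the Lightly Platform (token and dataset_id are placeholders)
client = ApiWorkflowClient(token="MY_TOKEN", dataset_id="MY_DATASET_ID")

# the agent tracks which images are preselected and which can be sampled
agent = ActiveLearningAgent(client)

# request 100 samples; the sampling method is left at its default here
# (assumed Coreset) -- pass method=... to choose Random or Coral instead
config = SamplerConfig(n_samples=100, name="initial-selection")
agent.query(config)

# filenames of the images selected for annotation so far
print(agent.labeled_set)
```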

Check out our [documentation](https://docs.lightly.ai/index.html) to learn more!

API (breaking)
With the refactoring of our API, we are switching to using a generated Python client. This leads to clearer and unified endpoints, fewer errors, and better error messages. Unfortunately, this means that previous versions of the package are no longer compatible with our new API.
**Note that this only affects API calls. Using the package for self-supervised learning is unaffected.**

Models
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
