Lightly

Latest version: v1.5.4

1.4.1

Changes
- New FastSiam model: [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU](https://link.springer.com/chapter/10.1007/978-3-031-16788-1_4) (see the sketch after this list)
- Add helper to list registered Lightly Workers
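
FastSiam extends SimSiam to several views per image and compares each prediction against the average projection of the remaining views. Below is a minimal sketch built from the SimSiam heads and loss already shipped with lightly; the dedicated FastSiam modules and transform added in this release may expose different names, so treat this as an illustration of the idea rather than the reference implementation.

```python
import torch
import torchvision
from torch import nn

from lightly.loss import NegativeCosineSimilarity
from lightly.models.modules import SimSiamPredictionHead, SimSiamProjectionHead


class FastSiam(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.projection_head = SimSiamProjectionHead(512, 512, 128)
        self.prediction_head = SimSiamPredictionHead(128, 64, 128)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        p = self.prediction_head(z)
        return z.detach(), p  # stop-gradient on the projection


resnet = torchvision.models.resnet18()
backbone = nn.Sequential(*list(resnet.children())[:-1])
model = FastSiam(backbone)
criterion = NegativeCosineSimilarity()

# Four augmented views per image (random tensors stand in for real batches).
views = [torch.randn(8, 3, 64, 64) for _ in range(4)]
zs, ps = zip(*[model(view) for view in views])

# Compare each view's prediction with the mean projection of the other views.
loss = 0.0
for i, p in enumerate(ps):
    target = torch.stack([z for j, z in enumerate(zs) if j != i]).mean(dim=0)
    loss = loss + criterion(p, target)
loss = loss / len(ps)
```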

Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https://link.springer.com/chapter/10.1007/978-3-031-16788-1_4)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [PMSN: Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.4.0

This release includes some breaking changes for users of [Lightly Worker](https://docs.lightly.ai/docs/install-lightly).

Breaking Changes
- Jobs are now scheduled with config v3 for Lightly Worker 2.6 (breaking); a sketch follows this list.
- Remove `object_level` config option (breaking).
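
For reference, a minimal sketch of scheduling a job from the pip package. The token, dataset id, and config values below are placeholders, and the exact config v3 options accepted by Lightly Worker 2.6 are described in the Lightly documentation.

```python
from lightly.api import ApiWorkflowClient

# Placeholders; use your own token and dataset id.
client = ApiWorkflowClient(token="MY_LIGHTLY_TOKEN", dataset_id="MY_DATASET_ID")

scheduled_run_id = client.schedule_compute_worker_run(
    worker_config={
        "enable_training": False,  # example option, not an exhaustive config
    },
    selection_config={
        "n_samples": 100,
        "strategies": [
            {"input": {"type": "EMBEDDINGS"}, "strategy": {"type": "DIVERSITY"}},
        ],
    },
)
print(scheduled_run_id)
```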

Changes
- Automate releases using GitHub Actions
- Split ApiWorkflowClient download and export functionality
- Preparation for instance segmentation support


Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [PMSN: Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.3.3

Changes

- New PMSN model: [Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277); a sketch of the PMSN regularizer follows this list.
- Add deprecation warning for active learning workflow.
- Add deprecation warning for collate functions.
- Remove deprecated documentation.
- Refactor use of transforms.
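
PMSN keeps the MSN training setup but replaces the uniform prior in the regularization term with a power-law prior over the prototypes. The snippet below sketches only that regularizer in plain PyTorch to illustrate the idea; the loss shipped with lightly wraps this together with the usual MSN objective and may use different defaults.

```python
import torch


def power_law_prior(num_prototypes: int, exponent: float = 0.25) -> torch.Tensor:
    # p(k) proportional to k^(-exponent), normalized over the prototypes.
    ranks = torch.arange(1, num_prototypes + 1, dtype=torch.float32)
    prior = ranks ** (-exponent)
    return prior / prior.sum()


def pmsn_regularizer(probs: torch.Tensor, exponent: float = 0.25) -> torch.Tensor:
    # probs: (batch_size, num_prototypes) softmax assignments of the anchor views.
    mean_probs = probs.mean(dim=0)
    prior = power_law_prior(probs.shape[1], exponent).to(probs.device)
    # KL(mean assignment || power-law prior), added to the MSN cross-entropy term.
    return torch.sum(mean_probs * (mean_probs.log() - prior.log()))


probs = torch.softmax(torch.randn(16, 1024), dim=1)
print(pmsn_regularizer(probs, exponent=0.25))
```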


Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [PMSN: Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.3.2

PyTorch Lightning 2.0 Compatibility

PyTorch Lightning introduced breaking changes in the way devices and accelerators are specified. We updated the code and example models accordingly. For details, see the [PR](https://github.com/lightly-ai/lightly/commit/dce4cc0c18c4db5cda5fa9b3cd8af8dbd9d08df3).
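
As an example, the old `gpus` argument is gone in Lightning 2.0 and devices are selected via `accelerator` and `devices`:

```python
import pytorch_lightning as pl

# Lightning 1.x style (no longer supported in 2.0):
#   trainer = pl.Trainer(max_epochs=10, gpus=1)

# Lightning 2.0 style, as now used in the lightly examples:
trainer = pl.Trainer(max_epochs=10, accelerator="gpu", devices=1)
```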

Benchmarks now use transforms
The benchmarks now apply augmentations (e.g. color jitter) in the dataset transform instead of in the dataloader's collate function, and have been updated accordingly. For details, see the [PR](https://github.com/lightly-ai/lightly/pull/1076).
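
In user code the same shift looks roughly like the sketch below, assuming the `SimCLRTransform` and `LightlyDataset` names from the lightly package (check your installed version for the exact import paths):

```python
import torch

from lightly.data import LightlyDataset
from lightly.transforms import SimCLRTransform

# Augmentations (including color jitter) now live in the dataset transform...
transform = SimCLRTransform(input_size=32)
dataset = LightlyDataset("path/to/images/", transform=transform)

# ...so the DataLoader no longer needs a lightly collate function.
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
```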

Other changes
- The `ApiWorkflowClient` is now picklable, which makes it easier to use with Python multiprocessing (see the sketch after this list).
- The LabelBox export now supports LabelBox format v4.
- Smaller fixes for a better user experience.
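
A picklable client can be handed to worker processes. A minimal sketch of the round trip; the token and dataset id are placeholders.

```python
import pickle

from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="MY_LIGHTLY_TOKEN", dataset_id="MY_DATASET_ID")
restored = pickle.loads(pickle.dumps(client))  # round-trips without raising
```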

Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.3.1

Changes
* Raise error when trying to schedule a Lightly Worker job with unknown configuration arguments.
* Add the `get_compute_worker_run_checkpoint_url` method to `ApiWorkflowClient`, which returns the URL of a checkpoint trained by the Lightly Worker (see the sketch after this list).
* Fix error in Lightly version check on Windows.
* Remove the deprecated PyTorch Lightning `progress_bar_refresh_rate` trainer argument from the tutorials.
* Make Masked Autoencoder work with half-precision training.
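
A minimal sketch of fetching a checkpoint URL with the new method; the token and run id are placeholders, and the exact argument name is an assumption, so check the API reference of your lightly version.

```python
from lightly.api import ApiWorkflowClient

client = ApiWorkflowClient(token="MY_LIGHTLY_TOKEN")
# run_id identifies a finished Lightly Worker run (placeholder value).
checkpoint_url = client.get_compute_worker_run_checkpoint_url(run_id="MY_RUN_ID")
print(checkpoint_url)
```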

Other
* The lightly package is now formatted using the [black](https://github.com/psf/black) code formatter and [isort](https://github.com/PyCQA/isort).

Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.3.0

This release includes some breaking changes, especially for users of the [Lightly Worker](https://docs.lightly.ai/docs/install-lightly). Please follow the [migration guide](https://docs.lightly.ai/docs/migrating-to-worker-v25) for version compatibility.

Breaking Changes
* Make lightly.api independent of torch(vision) (breaking).
* Validate the config created by the api (breaking).
* Fix scheduling jobs with config v2 (breaking).

Changes
* Add benchmark results with the new GaussianBlur implementation.
* Add transforms for all SSL models (see the sketch after this list).
* Remove extra pooling layers from benchmarks.
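
For illustration, a few of the per-model transforms as they are exposed in recent lightly releases; the exact set of names available in 1.3.0 may differ.

```python
from lightly.transforms import DINOTransform, SimCLRTransform, VICRegTransform

# Each transform bundles the augmentation recipe of its model.
simclr_transform = SimCLRTransform(input_size=224)
dino_transform = DINOTransform()  # global and local multi-crop views for DINO
vicreg_transform = VICRegTransform(input_size=224)
```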

Other
* Set the creator field on various API endpoints used by the pip package.

Models
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)
