pytorch-metric-learning

Latest version: v2.5.0


0.9.88

Bug fix
Removed the circular import which caused an ImportError when the reducers module was imported before anything else. See 125

0.9.87

v0.9.87 comes with some major changes that may cause your existing code to break.

**BREAKING CHANGES**
Losses
- The avg_non_zero_only init argument has been removed from ContrastiveLoss, TripletMarginLoss, and SignalToNoiseRatioContrastiveLoss. Here's how to translate from old to new code:
- avg_non_zero_only=True: Just remove this input parameter. Nothing else needs to be done as this is the default behavior.
- avg_non_zero_only=False: Remove this input parameter and replace it with reducer=reducers.MeanReducer(). You'll need to add this to your imports: from pytorch_metric_learning import reducers
- learnable_param_names and num_class_per_param have been removed from BaseMetricLossFunction due to lack of use.
- MarginLoss is the only built-in loss function that is affected by this. Here's how to translate from old to new code:
- learnable_param_names=["beta"]: Remove this input parameter and instead pass in learn_beta=True.
- num_class_per_param=N: Remove this input parameter and instead pass in num_classes=N.

AccuracyCalculator
- The average_per_class init argument is now avg_of_avgs. The new name better reflects the functionality.
- The old way to import was: from pytorch_metric_learning.utils import AccuracyCalculator. This will no longer work. The new way is: from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator. The reason for this change is to avoid an unnecessary import of the Faiss library, especially when this library is used in other packages.


**New feature: Reducers**
Reducers specify how to go from many loss values to a single loss value. For example, the ContrastiveLoss computes a loss for every positive and negative pair in a batch. A reducer will take all these per-pair losses, and reduce them to a single value. Here's where reducers fit in this library's flow of filters and computations:

Your Data --> Sampler --> Miner --> Loss --> Reducer --> Final loss value

Reducers are passed into loss functions like this:
```python
from pytorch_metric_learning import losses, reducers
reducer = reducers.SomeReducer()
loss_func = losses.SomeLoss(reducer=reducer)
loss = loss_func(embeddings, labels)  # in your training for-loop
```

Internally, the loss function creates a dictionary that contains the losses and other information. The reducer takes this dictionary, performs the reduction, and returns a single value on which .backward() can be called. Most reducers are written such that they can be passed into any loss function.
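To make the reduction step concrete, here is a minimal, library-independent sketch (plain PyTorch, not the library's actual reducer API) of the default behavior of averaging only the non-zero loss values:

```python
import torch

def avg_non_zero(per_element_losses):
    # Average only the non-zero entries (e.g. pairs that violated the margin);
    # return 0 if every element's loss is already 0.
    nonzero = per_element_losses[per_element_losses > 0]
    if nonzero.numel() == 0:
        return torch.tensor(0.0)
    return nonzero.mean()

per_pair = torch.tensor([0.0, 0.5, 0.0, 1.5])
reduced = avg_non_zero(per_pair)  # (0.5 + 1.5) / 2 = 1.0
```

Passing `reducers.MeanReducer()` instead would correspond to averaging over all entries, zeros included.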

See [the documentation](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/) for details.


**Other updates**
Utils
Inference
- InferenceModel has been added to the library. It is a model wrapper that makes it convenient to find matching pairs within a batch, or from a set of pairs. Take a look at [this notebook](https://colab.research.google.com/github/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/Inference.ipynb) to see example usage.

AccuracyCalculator
- The k value for k-nearest neighbors can optionally be specified as an init argument.
- k-nn based metrics now receive knn distances in their kwargs. See 118 by marijnl

Other stuff
Unit tests were added for almost all losses, miners, regularizers, and reducers.

**Bug fixes**
Trainers
- Fixed a labels related bug in TwoStreamMetricLoss. See 112 by marijnl

Loss and miner utils
- Fixed bug where convert_to_triplets could encounter a RuntimeError. See 95

0.9.86

**Losses + miners**
- Added assertions to make sure the number of input embeddings is equal to the number of input labels.
- MarginLoss
- Fixed bug where loss explodes if self.nu > 0 and number of active pairs is 0. See https://github.com/KevinMusgrave/pytorch-metric-learning/issues/98#issue-618347291


**Trainers**
- Added freeze_these to the init arguments of BaseTrainer. This optional argument takes a list or tuple of strings as input. The strings must correspond to the names of models or loss functions, and these models/losses will have their parameters frozen during training. Their corresponding optimizers will also not be stepped.
- Fixed indices shifting bug in the TwoStreamMetricLoss trainer. By marijnl
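As a rough sketch (plain PyTorch, not the trainer's internals), freezing a model amounts to disabling gradients for its parameters, so that neither backpropagation nor an optimizer step can change them:

```python
import torch

embedder = torch.nn.Linear(8, 4)

# Roughly what freeze_these=["embedder"] would do for this model;
# the trainer additionally skips the corresponding optimizer's step.
for p in embedder.parameters():
    p.requires_grad = False

frozen = all(not p.requires_grad for p in embedder.parameters())
```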

**Testers**
- BaseTester
- Pass in epoch to visualizer_hook
- Added eval option to get_all_embeddings. By default it is True, and will set the input trunk and embedder to eval() mode.

**Utils**
- HookContainer
- Allow training to resume from best model, rather than just the latest model.
- **The best models are now saved as <model_name>_best<epoch>.pth rather than <model_name>_best.pth.** To easily get the new suffix for loading the best model you can do:
```python
from pytorch_metric_learning.utils import common_functions as c_f
_, best_model_suffix = c_f.latest_version(your_model_folder, best=True)
best_trunk = "trunk_{}.pth".format(best_model_suffix)
best_embedder = "embedder_{}.pth".format(best_model_suffix)
```

0.9.85

**Trainers**
- Added [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss). By marijnl.
- All BaseTrainer child classes now accept *args and pass them to BaseTrainer, so you can use positional arguments when initializing those child classes, rather than only keyword arguments.
- Fixed a key verification bug in CascadedEmbeddings that made it impossible to pass in an optimizer for the metric loss.

**Testers**
- Added [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester). By marijnl
- BaseTester
- The input visualizer should now implement the fit_transform method, rather than fit and transform separately.
- Fixed various bugs related to label_hierarchy_level
- WithSameParentLabelTester
- Fixed bugs that were causing this tester to encounter a runtime error.

**Utils**
- HookContainer
- Added methods for retrieving loss and accuracy history.
- Fixed bug where the value for best_epoch could be None.
- AccuracyCalculator
- Fixed a bug where NaN was returned for classes containing only one sample.
- Added average_per_class option, which computes the average accuracy per class, and then returns the average of those averages. This can be useful when evaluating datasets with unbalanced classes.
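A small worked example (hypothetical numbers) shows why the average of per-class averages differs from plain accuracy on unbalanced classes:

```python
# Per-sample correctness (1 = correct), grouped by class:
per_class_correct = {
    "cat": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],  # 10 samples, 80% accurate
    "rare_bird": [0, 0],                     # 2 samples, 0% accurate
}

# Global accuracy is dominated by the large class:
all_results = [r for results in per_class_correct.values() for r in results]
global_acc = sum(all_results) / len(all_results)  # 8/12 ~= 0.667

# average_per_class weights every class equally:
class_accs = [sum(r) / len(r) for r in per_class_correct.values()]
avg_of_avgs = sum(class_accs) / len(class_accs)   # (0.8 + 0.0) / 2 = 0.4
```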

**Other stuff**
- Added the with-hooks and with-hooks-cpu pip install options. The following will install record-keeper, faiss-gpu, and tensorboard, in addition to pytorch-metric-learning:

pip install pytorch-metric-learning[with-hooks]

If you don't have a GPU you can do:

pip install pytorch-metric-learning[with-hooks-cpu]

- Added more tests for AccuracyCalculator

0.9.84

**Testers**
- BaseTester
- Removed size_of_tsne and added visualizer and visualizer_hook to BaseTester. The visualizer needs to implement the fit and transform functions. (In the next version, I'll allow fit_transform as well.) For example:
```python
import logging
import matplotlib.pyplot as plt

# UMAP is the dimensionality reducer we will pass in as the visualizer
import umap
import umap.plot

# For plotting the embeddings
def visualizer_hook(umapper, umap_embeddings, labels, split_name, keyname):
    logging.info("UMAP plot for the {} split and label set {}".format(split_name, keyname))
    umap.plot.points(umapper, labels=labels, show_legend=False)
    plt.show()

GlobalEmbeddingSpaceTester(visualizer=umap.UMAP(), visualizer_hook=visualizer_hook)
```


**Utils**
- AccuracyCalculator
- Added include to the init arguments.
- Renamed exclude_metrics to exclude.
- Added the requires_knn method.
- Added check_primary_metrics to AccuracyCalculator, which validates the metrics specified in include and exclude. By wconnell
- HookContainer
- Check if primary_metric is in tester.AccuracyCalculator. By wconnell
- logging_presets
- Added **kwargs to get_hook_container, so that, for example, you can do get_hook_container(record_keeper, primary_metric="AMI")

**Other stuff**
- Added an [example Google Colab notebook](https://colab.research.google.com/drive/1fwTC-GRW3X6QiJq6_abJ47On2f3s9e5e) which goes through the entire training/testing workflow.

0.9.83

**Losses**
- Added [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss), implemented by AlenUbuntu
- Changes to ProxyAnchorLoss:
- Fixed bug that caused it to break when normalize_embeddings=False
- Made it extend WeightRegularizerMixin
- Fixed/improved application of miner_weights in ProxyAnchorLoss, NCALoss, and FastAPLoss


**Utils**
- Added [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/utils/#accuracycalculator)
- Changes to loss_and_miner_utils
- Made convert_to_weights return values between 0 and 1, where 1 represents the most frequently occurring sample. Previously, it scaled the probability by the batch size.
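As an illustration of the new scaling (a hypothetical sketch, not the exact convert_to_weights implementation), each sample's weight is how often it appears in the miner output, divided by the largest such count:

```python
from collections import Counter

# Hypothetical miner output: flattened sample indices from mined pairs/triplets
mined_indices = [0, 0, 0, 1, 2, 2]
counts = Counter(mined_indices)
max_count = max(counts.values())  # 3: the most frequently mined sample

batch_size = 4
# The most frequently mined sample gets weight 1; unmined samples get 0.
weights = [counts.get(i, 0) / max_count for i in range(batch_size)]
```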

**Other stuff**
- Added a test for convert_to_weights
