v0.9.87 comes with some major changes that may cause your existing code to break.
**BREAKING CHANGES**
Losses
- The `avg_non_zero_only` init argument has been removed from `ContrastiveLoss`, `TripletMarginLoss`, and `SignalToNoiseRatioContrastiveLoss`. Here's how to translate from old to new code (see the sketch after this list):
  - `avg_non_zero_only=True`: just remove this argument. Nothing else needs to be done, as this is the default behavior.
  - `avg_non_zero_only=False`: remove this argument and pass in `reducer=reducers.MeanReducer()` instead. You'll need to add `from pytorch_metric_learning import reducers` to your imports.
- `learnable_param_names` and `num_class_per_param` have been removed from `BaseMetricLossFunction` due to lack of use.
- `MarginLoss` is the only built-in loss function affected by this. Here's how to translate from old to new code (also covered in the sketch below):
  - `learnable_param_names=["beta"]`: remove this argument and pass in `learn_beta=True` instead.
  - `num_class_per_param=N`: remove this argument and pass in `num_classes=N` instead.
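For example, here is a minimal before-and-after sketch covering both changes. The `margin`, `nu`, `beta`, and `num_classes` values are illustrative placeholders, not recommendations:

```python
from pytorch_metric_learning import losses, reducers

# Old: losses.ContrastiveLoss(avg_non_zero_only=True)
# New: averaging over non-zero losses is the default, so no argument is needed.
loss_a = losses.ContrastiveLoss()

# Old: losses.ContrastiveLoss(avg_non_zero_only=False)
# New: pass a MeanReducer to average over all pair losses.
loss_b = losses.ContrastiveLoss(reducer=reducers.MeanReducer())

# Old: losses.MarginLoss(margin=0.2, nu=0, beta=1.2,
#                        learnable_param_names=["beta"], num_class_per_param=10)
# New: use learn_beta and num_classes instead.
loss_c = losses.MarginLoss(margin=0.2, nu=0, beta=1.2, learn_beta=True, num_classes=10)
```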
AccuracyCalculator
- The `average_per_class` init argument is now `avg_of_avgs`. The new name better reflects the functionality.
- The old import, `from pytorch_metric_learning.utils import AccuracyCalculator`, no longer works. The new import is `from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator`. The reason for this change is to avoid an unnecessary import of the Faiss library, especially when this package is used within other packages.
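For example, a minimal sketch combining the new import path and the renamed argument:

```python
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

# Old: AccuracyCalculator(average_per_class=True)
calculator = AccuracyCalculator(avg_of_avgs=True)
```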
**New feature: Reducers**
Reducers specify how to go from many loss values to a single loss value. For example, the ContrastiveLoss computes a loss for every positive and negative pair in a batch. A reducer will take all these per-pair losses, and reduce them to a single value. Here's where reducers fit in this library's flow of filters and computations:
Your Data --> Sampler --> Miner --> Loss --> Reducer --> Final loss value
Reducers are passed into loss functions like this:
```python
from pytorch_metric_learning import losses, reducers

reducer = reducers.SomeReducer()
loss_func = losses.SomeLoss(reducer=reducer)
loss = loss_func(embeddings, labels)  # in your training for-loop
```
Internally, the loss function creates a dictionary that contains the losses and other information. The reducer takes this dictionary, performs the reduction, and returns a single value on which `.backward()` can be called. Most reducers are written such that they can be passed into any loss function.
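For example, a minimal runnable sketch using `MeanReducer` (mentioned above) with `ContrastiveLoss`; the batch and embedding sizes are arbitrary:

```python
import torch
from pytorch_metric_learning import losses, reducers

# MeanReducer averages over all pair losses, rather than only the non-zero ones.
loss_func = losses.ContrastiveLoss(reducer=reducers.MeanReducer())

embeddings = torch.randn(32, 128, requires_grad=True)  # batch of 32 embeddings
labels = torch.randint(0, 5, (32,))                    # labels from 5 classes
loss = loss_func(embeddings, labels)
loss.backward()
```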
See [the documentation](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/) for details.
**Other updates**
Utils
Inference
- `InferenceModel` has been added to the library. It is a model wrapper that makes it convenient to find matching pairs within a batch, or from a set of pairs. Take a look at [this notebook](https://colab.research.google.com/github/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/Inference.ipynb) to see example usage.
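As a hedged sketch of what usage might look like: the `MatchFinder` helper, its arguments, and the `get_matches` method below are assumptions based on the linked notebook, so check the notebook for the exact API:

```python
import torch
from pytorch_metric_learning.utils.inference import InferenceModel, MatchFinder

# A toy embedding model; substitute your trained trunk.
trunk = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))

# MatchFinder decides whether two embeddings match (here: similarity >= 0.7).
match_finder = MatchFinder(mode="sim", threshold=0.7)
inference_model = InferenceModel(trunk, match_finder=match_finder)

batch = torch.randn(16, 3, 32, 32)  # e.g. a batch of images
matches = inference_model.get_matches(batch)  # pairwise match matrix for the batch
```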
AccuracyCalculator
- The `k` value for k-nearest neighbors can optionally be specified as an init argument (see the sketch below).
- k-nn based metrics now receive knn distances in their kwargs. See #118 by marijnl
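For example, a minimal sketch of passing `k` at init time. The `get_accuracy` call follows this release's documented usage, but treat the exact signature as an assumption:

```python
import numpy as np
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

# k caps the number of nearest neighbors retrieved for the k-nn based metrics.
calculator = AccuracyCalculator(k=10)

embeddings = np.random.randn(100, 64).astype(np.float32)
labels = np.random.randint(0, 5, size=100)

# Compare the embedding set against itself.
accuracies = calculator.get_accuracy(
    embeddings, embeddings, labels, labels, embeddings_come_from_same_source=True
)
```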
Other stuff
- Unit tests were added for almost all losses, miners, regularizers, and reducers.
**Bug fixes**
Trainers
- Fixed a labels-related bug in `TwoStreamMetricLoss`. See #112 by marijnl
Loss and miner utils
- Fixed a bug where `convert_to_triplets` could encounter a `RuntimeError`. See #95