Metapredict

Latest version: v2.63


2.63

Changes:

* Capped the maximum supported Python version at 3.11 because of issues in Python 3.12.

* Changed line 207 in `/backend/meta_predict_disorder.py` from `output_values.append(round(float(i), 4))` to `output_values.append(round(float(i[0]), 4))` to get rid of a NumPy deprecation warning when using NumPy 1.25 or later. Noting this in the changelog in case it causes problems, but it is passing all tests plus additional local tests at this time, so it should be OK.
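For context, NumPy 1.25 began deprecating implicit conversion of size-1 arrays to Python scalars, which is what the old `float(i)` call relied on; extracting the element first avoids the warning. A minimal sketch of the behavior (variable names are illustrative, not metapredict's actual code):

```python
import numpy as np

# Rows of per-residue scores, each a size-1 array, as a model might emit them.
predictions = np.array([[0.1234567], [0.9876543]])

output_values = []
for i in predictions:
    # float(i) on a size-1 ndarray triggers a DeprecationWarning on
    # NumPy >= 1.25; extracting the scalar with i[0] first does not.
    output_values.append(round(float(i[0]), 4))

print(output_values)  # [0.1235, 0.9877]
```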

2.62

Changes:

* Made sure predictions used `torch.no_grad()`
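Wrapping inference in `torch.no_grad()` skips autograd graph construction, cutting memory use and time during prediction. A minimal sketch of the pattern (the model here is a placeholder, not metapredict's network):

```python
import torch

model = torch.nn.Linear(4, 1)   # stand-in for the disorder predictor
x = torch.randn(2, 4)

with torch.no_grad():           # no gradient graph is recorded here
    out = model(x)

print(out.requires_grad)  # False: nothing to backpropagate through
```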

* Removed the dependency pin on torch < 2.0.

* Bug fix: When running `metapredict-predict-idrs` using `--mode shephard-domains` or `--mode shephard-domains-uniprot` in the 2.6 update, we accidentally returned the start index of each IDR as a 0-indexed position, but SHEPHARD uses 1-based inclusive indexing. This has now been fixed.
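The off-by-one arises because Python spans are 0-based while SHEPHARD expects 1-based inclusive bounds, so the fix is just an increment on the start position. A sketch of the mapping (the helper name is hypothetical, not metapredict's actual code):

```python
def to_shephard_bounds(start, end_exclusive):
    """Map a 0-based, end-exclusive Python span to SHEPHARD's
    1-based, inclusive (start, end) convention."""
    return start + 1, end_exclusive

# An IDR over the first ten residues, i.e. Python slice [0:10],
# becomes residues 1-10 inclusive in SHEPHARD terms.
print(to_shephard_bounds(0, 10))  # (1, 10)
```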

* Bug fix: We discovered that when using torch 1.13.0 pack-n-pad (which is not the default and must be specifically requested) can lead to some small numerical inaccuracies in disorder predictions (<0.01). The reason this can be problematic is that it MAY alter the boundary between a disordered and folded domain when compared to the single iterative analysis. This issue is fixed in torch 2.0.1 and as such we now internally require torch 2.0.1 if pack-n-pad is going to be used, otherwise we fall back to size-collect. Note that this approach avoids a hard dependency on torch 2.0.1. To be clear, using pack-n-pad is currently impossible from the command line and would require someone to explicitly have requested it from the API.
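The fallback described above can be sketched as a simple version gate (function names and version parsing are illustrative; metapredict's internals may differ):

```python
def _version_tuple(v):
    # Crude parse, good enough for a sketch: strip any local suffix
    # (e.g. "2.0.1+cu118") and compare the numeric components.
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

def choose_batch_algorithm(requested, torch_version):
    # pack-n-pad is only honored on torch >= 2.0.1, where the numerical
    # inaccuracy is fixed; otherwise fall back to size-collect.
    if requested == "pack-n-pad" and _version_tuple(torch_version) >= (2, 0, 1):
        return "pack-n-pad"
    return "size-collect"

print(choose_batch_algorithm("pack-n-pad", "1.13.0"))  # size-collect
print(choose_batch_algorithm("pack-n-pad", "2.0.1"))   # pack-n-pad
```

Gating at call time rather than pinning `torch>=2.0.1` in the package metadata is what avoids the hard dependency mentioned above.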

2.61

Changes:


* Renamed the batch algorithms from `mode 1` and `mode 2` to `size-collect` and `pack-n-pad`, respectively.

* The default batch mode is always `size-collect`, which in most cases is faster and is always available. You can force `pack-n-pad` by explicitly requesting it via the API.

2.6

Changes:

* V2.6 represents an update of metapredict to a version we refer to as metapredict V2-FF. V2-FF provides a dramatic improvement in prediction performance when batch mode is used. On CPUs, this provides a 5-20x improvement in performance. On GPUs, this enables proteome-wide prediction in seconds.

* Removed explicit multicore support and replaced it with implicit parallelization via `batch_predict()`.

* `batch_predict()` is now called in non-legacy prediction for `predict_disorder_fasta()`, and can also be invoked via `predict_disorder_batch()`, which can take either a list or a dictionary of sequences.
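Accepting either a list or a dictionary of sequences usually means normalizing to one shape internally. A minimal sketch of that pattern (illustrative only, not metapredict's actual code; the generated names are hypothetical):

```python
def normalize_inputs(sequences):
    """Accept either a list of sequences or a dict of name -> sequence.

    Lists get auto-generated names so downstream batch code can work
    with a single shape.
    """
    if isinstance(sequences, dict):
        return dict(sequences)
    return {f"sequence_{i}": s for i, s in enumerate(sequences)}

print(normalize_inputs(["MKV", "AAA"]))  # {'sequence_0': 'MKV', 'sequence_1': 'AAA'}
print(normalize_inputs({"p1": "MKV"}))   # {'p1': 'MKV'}
```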

* From the command line, `metapredict-predict-idrs` and `metapredict-predict-disorder` will also use batch mode if `legacy=False` (the default), which, as well as being much faster, now provides a status bar.

* Note that this update adds `tqdm` as a dependency for metapredict.

2.5

Changes:

* Added the first multicore support to metapredict. Currently limited to `metapredict-predict-disorder` functionality.

2.4.3

Changes:

* Updated the default names for `metapredict-predict-idrs` so that the FASTA output file is now called `idrs.fasta` instead of the misnamed `shephard_idrs.tsv`.
* Added link to the new batch-mode Google Colab notebook!
