coremltools

Latest version: v7.2


7.2

* New Features
  * Supports ExecuTorch 0.2 (see the [ExecuTorch doc](https://pytorch.org/executorch/stable/build-run-coreml.html) for examples)
    * Core ML Partitioner: if a PyTorch model is only partially supported by Core ML, the partitioner identifies the supported subgraphs so that ExecuTorch can delegate them to the Core ML backend (see the sketch after this list).
    * Core ML Quantizer: quantizes PyTorch models using Core ML-favored quantization schemes.
* Enhancements
  * Improved Model Conversion Speed
  * Expanded Operation Translation Coverage:
    * add `torch.narrow`
    * add `torch.adaptive_avg_pool1d` and `torch.adaptive_max_pool1d`
    * add `torch.numpy_t` (i.e. the numpy-style transpose operator `.T`)
    * enhance `torch.clamp_min` for integer data types
    * enhance `torch.add` for complex data types
    * enhance `tf.math.top_k` when `k` is variable
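
A rough sketch of the partitioner flow, assuming the import path shown in the ExecuTorch 0.2 Core ML doc linked above (paths may differ across ExecuTorch versions; `MyModel` and the example input are placeholders):

```python
import torch
from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge

# Export the PyTorch model to an EXIR edge program.
model = MyModel().eval()  # placeholder torch.nn.Module
example_inputs = (torch.randn(1, 3, 224, 224),)
edge_program = to_edge(torch.export.export(model, example_inputs))

# Delegate every Core ML-supported subgraph to the Core ML backend;
# any unsupported ops stay in the default ExecuTorch runtime.
delegated = edge_program.to_backend(CoreMLPartitioner())
executorch_program = delegated.to_executorch()
```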

Thanks to our ExecuTorch partners and our open-source community: KrassCodes, M-Quadra, teelrabbit, minimalic, alealv, ChinChangYang, pcuenca

7.1

* **New Features**:
  * Supports Torch 2.1
  * Includes experimental support for the `torch.export` API, limited to the EDGE dialect.
  * Example usage:

```python
import torch
from torch.export import export
from executorch.exir import to_edge

import coremltools as ct

# `AnyNNModule` and `size` are placeholders for your own torch.nn.Module
# and input shape.
example_args = (torch.randn(*size),)
aten_dialect = export(AnyNNModule(), example_args)
edge_dialect = to_edge(aten_dialect).exported_program()
edge_dialect._dialect = "EDGE"

mlmodel = ct.convert(edge_dialect)
```

* **Enhancements**:
  * API: `ct.utils.make_pipeline` now allows specifying `compute_units` (see the sketch after this list).
  * New optimization passes:
    * Folds selective data-movement ops, such as `reshape` and `transpose`, into adjacent constant compressed weights.
    * Casts int32 → int16 dtype for all intermediate tensors when compute precision is set to fp16.
  * PyTorch op `multinomial`: adds lowering to Core ML.
  * Type-related refinements to the Pad and Gather/Gather-like ops.
* **Bug Fixes**:
  * Fixes a coremltools build issue related to the kmeans1d package.
  * Minor fixes in the lowering of the PyTorch ops `masked_fill` and `randint`.
* Various other bug fixes, enhancements, cleanups, and optimizations.
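
A minimal sketch of the `make_pipeline` enhancement (the model paths are placeholders):

```python
import coremltools as ct

# Chain two models into a single pipeline model, and force the whole
# pipeline to run on CPU only.
model_1 = ct.models.MLModel("first.mlpackage")
model_2 = ct.models.MLModel("second.mlpackage")

pipeline = ct.utils.make_pipeline(
    model_1,
    model_2,
    compute_units=ct.ComputeUnit.CPU_ONLY,
)
pipeline.save("pipeline.mlpackage")
```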

7.0

* New submodule [`coremltools.optimize`](https://coremltools.readme.io/v7.0/docs/optimizing-models) for model quantization and compression
  * `coremltools.optimize.coreml` for compressing Core ML models in a data-free manner (see the sketch after this list). The `coremltools.compression_utils.*` APIs have been moved here.
  * `coremltools.optimize.torch` for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using `coremltools.convert`.
* The default backend is now `mlprogram` for iOS15/macOS12. Previously, calling `coremltools.convert()` without the `convert_to` or `minimum_deployment_target` arguments used the lowest deployment target (iOS11/macOS10.13) and the `neuralnetwork` backend; conversion now defaults to iOS15/macOS12 and the `mlprogram` backend. You can change this behavior by providing a `minimum_deployment_target` or `convert_to` value.
* Python 3.11 support.
* Support for new PyTorch ops: `repeat_interleave`, `unflatten`, `col2im`, `view_as_real`, `rand`, `logical_not`, `fliplr`, `quantized_matmul`, `randn`, `randn_like`, `scaled_dot_product_attention`, `stft`, `tile`
* A `pass_pipeline` parameter has been added to `coremltools.convert` to allow control over which optimizations are performed.
* MLModel batch prediction support.
* Support for converting statically quantized PyTorch models.
* Prediction from compiled models (`.mlmodelc` files); get compiled model files from an `MLModel` instance; Python API to explicitly compile a model (see the sketch after this list).
* Faster weight palettization for large tensors.
* New utility method for getting weight metadata: `coremltools.optimize.coreml.get_weights_metadata`. This information can be used to customize optimization across ops when using `coremltools.optimize.coreml` APIs.
* New and updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
* `coremltools.compression_utils` is deprecated.
* Changes the default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when the `mlprogram` backend is used.
* Changes the upper input range behavior when the backend is `mlprogram`:
  * If `RangeDim` is used and its upper bound is not set to a positive number, an exception is raised.
  * If the `inputs` parameter is not used and there are undetermined dims in the input shape (for example, TF with "None" in an input placeholder), they are sanitized to a finite number (`default_size + 1`) and a warning is raised.
* Various other bug fixes, enhancements, cleanups, and optimizations.
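
A rough sketch of the data-free compression workflow in `coremltools.optimize.coreml` (6-bit k-means palettization is just an illustrative choice; the model path is a placeholder):

```python
import coremltools as ct
from coremltools.optimize.coreml import (
    OpPalettizerConfig,
    OptimizationConfig,
    palettize_weights,
)

# Load an existing mlprogram model.
mlmodel = ct.models.MLModel("model.mlpackage")

# Cluster each weight tensor into 2**6 = 64 values via k-means;
# no calibration data is needed.
config = OptimizationConfig(global_config=OpPalettizerConfig(mode="kmeans", nbits=6))

compressed_mlmodel = palettize_weights(mlmodel, config)
compressed_mlmodel.save("model_palettized.mlpackage")
```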
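
And a sketch of the compiled-model prediction flow (the path, input name, and shape are placeholders):

```python
import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel("model.mlpackage")

# Locate the compiled .mlmodelc files backing this MLModel instance ...
compiled_path = mlmodel.get_compiled_model_path()

# ... and load them directly, which skips recompilation and is much
# faster than re-loading the .mlpackage.
compiled_model = ct.models.CompiledMLModel(compiled_path)
prediction = compiled_model.predict(
    {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
)
```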

Special thanks to our external contributors for this release: fukatani, pcuenca, KWiecko, comeweber, sercand, mlaves, cclauss, smpanaro, nikalra, jszaday

7.0b2

* The default backend is now `mlprogram` for iOS15/macOS12. Previously, calling `coremltools.convert()` without the `convert_to` or `minimum_deployment_target` arguments used the lowest deployment target (iOS11/macOS10.13) and the `neuralnetwork` backend; conversion now defaults to iOS15/macOS12 and the `mlprogram` backend. You can change this behavior by providing a `minimum_deployment_target` or `convert_to` value.
* Changes the default I/O type for Neural Networks to FP16 for iOS16/macOS13 or later when the `mlprogram` backend is used.
* Changes the upper input range behavior when the backend is `mlprogram`:
  * If `RangeDim` is used and its upper bound is not set to a positive number, an exception is raised (see the sketch after this list).
  * If the `inputs` parameter is not used and there are undetermined dims in the input shape (for example, TF with "None" in an input placeholder), they are sanitized to a finite number (`default_size + 1`) and a warning is raised.
* New utility method for getting weight metadata: `coremltools.optimize.coreml.get_weights_metadata`. This information can be used to customize optimization across ops when using `coremltools.optimize.coreml` APIs (see the sketch after this list).
* Support for new PyTorch ops: `repeat_interleave` and `unflatten`.
* New and updated iOS17/macOS14 ops: `batch_norm`, `conv`, `conv_transpose`, `expand_dims`, `gru`, `instance_norm`, `inverse`, `l2_norm`, `layer_norm`, `linear`, `local_response_norm`, `log`, `lstm`, `matmul`, `reshape_like`, `resample`, `resize`, `reverse`, `reverse_sequence`, `rnn`, `rsqrt`, `slice_by_index`, `slice_by_size`, `sliding_windows`, `squeeze`, `transpose`.
* Various other bug fixes, enhancements, cleanups, and optimizations.
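
A sketch of setting an explicit upper bound on a flexible input dimension, as the `mlprogram` backend now requires (`traced_model`, the input name, and the sizes are placeholders):

```python
import coremltools as ct

# A flexible sequence-length dimension must declare a finite positive
# upper bound when converting to the mlprogram backend.
seq_len = ct.RangeDim(lower_bound=1, upper_bound=512, default=128)

mlmodel = ct.convert(
    traced_model,  # placeholder: a torch.jit.trace'd model
    inputs=[ct.TensorType(name="tokens", shape=(1, seq_len))],
    convert_to="mlprogram",
)
```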
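
And a sketch of the weight-metadata utility (the size threshold is just an illustrative value):

```python
from coremltools.optimize.coreml import get_weights_metadata

# Summarize every constant weight with more than 2048 elements; the
# returned dict maps weight names to metadata that can inform per-op
# compression configs in coremltools.optimize.coreml.
weight_metadata = get_weights_metadata(mlmodel, weight_threshold=2048)
for name, metadata in weight_metadata.items():
    print(name, metadata)
```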


Special thanks to our external contributors for this release: fukatani, pcuenca, KWiecko, comeweber and sercand

7.0b1

* New submodule [`coremltools.optimize`](https://coremltools.readme.io/v7.0/docs/optimizing-models) for model quantization and compression
  * `coremltools.optimize.coreml` for compressing Core ML models in a data-free manner. The `coremltools.compression_utils.*` APIs have been moved here.
  * `coremltools.optimize.torch` for compressing torch models with training data and fine-tuning. The fine-tuned torch model can then be converted using `coremltools.convert`.
* Updated MIL ops for iOS17/macOS14/watchOS10/tvOS17
* A `pass_pipeline` parameter has been added to `coremltools.convert` to allow control over which optimizations are performed.
* Python 3.11 support.
* MLModel batch prediction support.
* Support for converting statically quantized PyTorch models.
* New Torch layer support: `randn`, `randn_like`, `scaled_dot_product_attention`, `stft`, `tile`
* Faster weight palettization for large tensors.
* `coremltools.models.ml_program.compression_utils` is deprecated.
* Various other bug fixes, enhancements, cleanups, and optimizations.

Core ML tools 7.0 guide: https://coremltools.readme.io/v7.0/

Special thanks to our external contributors for this release: fukatani, pcuenca, mlaves, cclauss, smpanaro, nikalra, jszaday

6.3


* Torch 2.0 Support
* TensorFlow 2.12.0 Support
* Remove Python 3.6 support
* Functionality for controlling graph passes/optimizations; see the `pass_pipeline` parameter of `coremltools.convert` (and the sketch after this list).
* A utility function for easily creating pipelines; see `utils.make_pipeline`.
* A debug utility function for extracting submodels; see `converters.mil.debugging_utils.extract_submodel` (and the sketch after this list).
* Various other bug fixes, enhancements, cleanups, and optimizations.
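
A minimal sketch of controlling graph passes via `pass_pipeline` (the removed pass name is just an example; `traced_model` is a placeholder):

```python
import coremltools as ct

# Start from the default pass pipeline and skip one named pass.
pipeline = ct.PassPipeline()
pipeline.remove_passes({"common::fuse_conv_batchnorm"})

mlmodel = ct.convert(
    traced_model,  # placeholder: a torch.jit.trace'd model
    inputs=[ct.TensorType(shape=(1, 3, 224, 224))],
    pass_pipeline=pipeline,
)
```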
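
And a sketch of the submodel-extraction debug utility (the output name is hypothetical; use tensor names from your own converted model):

```python
from coremltools.converters.mil.debugging_utils import extract_submodel

# Carve out the subgraph that computes the intermediate tensor
# "var_12" so it can be inspected and validated in isolation.
submodel = extract_submodel(mlmodel, outputs=["var_12"])
```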


Special thanks to our external contributors for this release: fukatani, nikalra and kevin-keraudren.
