Coremltools

Latest version: v7.2

5.1

* New supported PyTorch operations: `broadcast_tensors`, `frobenius_norm`, `full`, `norm`, and `scatter_add`.
* Automatic support for in-place PyTorch operations when the corresponding non-in-place operation is supported (a minimal sketch follows this list).
* Support for PyTorch 1.9.1.
* Various other bug fixes, optimizations and improvements.
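
As an illustration of the in-place handling, here is a minimal sketch; the module, input shape, and input name are hypothetical and not taken from the release notes:

```python
import torch
import coremltools as ct

class InplaceRelu(torch.nn.Module):
    def forward(self, x):
        # In-place ReLU; the converter maps it to the supported non-in-place op.
        return torch.relu_(x + 1.0)

example = torch.rand(1, 3, 8, 8)  # hypothetical input shape
traced = torch.jit.trace(InplaceRelu().eval(), example)

mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=example.shape)])
```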

5.0

What’s New

* Added a new Core ML model type called ML Program. TensorFlow and PyTorch models can now be converted to ML Programs.
  * To learn about ML Programs, how they differ from the classical Core ML neural network type, and what they offer, please see the documentation [here](https://coremltools.readme.io/v5.0/docs/ml-programs).
  * Use the `convert_to` argument with the [unified converter API](https://coremltools.readme.io/v5.0/docs/unified-conversion-api) to indicate the model type of the Core ML model (a combined conversion-and-save example appears after this list).
    * `coremltools.convert(..., convert_to="mlprogram")` converts to a Core ML model of type ML program.
    * `coremltools.convert(..., convert_to="neuralnetwork")` converts to a Core ML model of type neural network. "Neural network" is the older Core ML format and continues to be supported. Calling `coremltools.convert(...)` without `convert_to` defaults to producing a neural network Core ML model.
  * When targeting ML program, an additional option is available to set the compute precision of the Core ML model to either float32 or float16. The default is float16. Usage example:
    * `ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT32)` or `ct.convert(..., convert_to="mlprogram", compute_precision=ct.precision.FLOAT16)`
    * To learn more about how this affects the runtime, see the documentation on [Typed execution](https://coremltools.readme.io/v5.0/docs/typed-execution).
  * You can save to the new [Model Package format](https://developer.apple.com/documentation/coreml/core_ml_api/updating_a_model_file_to_a_model_package) through the usual coremltools `save` method. Simply use `model.save("<model_name>.mlpackage")` instead of the usual `model.save("<model_name>.mlmodel")`.
    * Core ML is introducing a new model format called model packages. A model package is a container that stores each of a model's components in its own file, separating out the architecture, weights, and metadata. By separating these components, model packages let you easily edit metadata and track changes with source control. They also compile more efficiently, and provide more flexibility for tools that read and write models.
    * ML Programs can only be saved in the model package format.
* Adds the `compute_units` parameter to [MLModel](https://apple.github.io/coremltools/source/coremltools.models.html#module-coremltools.models.model) and [coremltools.convert](https://apple.github.io/coremltools/source/coremltools.converters.mil.html#module-coremltools.converters._converters_entry). This matches `MLComputeUnits` in [Swift](https://developer.apple.com/documentation/coreml/mlcomputeunits) and [Objective-C](https://developer.apple.com/documentation/coreml/mlcomputeunits?language=objc). Use this parameter to specify where your models can run (a usage sketch appears after this list):
  * `ALL` - use all compute units available, including the neural engine.
  * `CPU_ONLY` - limit the model to only use the CPU.
  * `CPU_AND_GPU` - use both the CPU and GPU, but not the neural engine.
* Python 3.9 Support
* Native M1 support for Python 3.8 and 3.9
* Support for TensorFlow 2.5
* Support for Torch 1.9.0
* New Torch ops: affine_grid_generator, einsum, expand, grid_sampler, GRU, linear, index_put, maximum, minimum, SiLUs, sort, torch_tensor_assign, zeros_like.
* Added a flag to skip loading a model during conversion. This is useful when converting a model for a newer macOS version while running on an older macOS version:
`ct.convert(....., skip_model_load=True)`
* Various bug fixes, optimizations and additional testing.
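
Putting the conversion options above together, here is a minimal sketch; `traced_model` and `example_input` are placeholders for your own TorchScript model and sample input, and the file names are illustrative:

```python
import coremltools as ct

# Convert to an ML program with an explicit compute precision
# (float16 is already the default for ML programs).
mlprogram = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,
)
mlprogram.save("MyModel.mlpackage")  # ML programs must be saved as a model package

# Convert to the older neural network format (also the default when convert_to is omitted).
neural_net = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
    convert_to="neuralnetwork",
)
neural_net.save("MyModel.mlmodel")
```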
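
And a short sketch of the new `compute_units` parameter, assuming the model package saved above; the commented-out input dictionary is a placeholder for your model's actual inputs:

```python
import coremltools as ct

# Restrict execution to the CPU; other options are ct.ComputeUnit.ALL (the
# default, which may use the neural engine) and ct.ComputeUnit.CPU_AND_GPU.
model = ct.models.MLModel("MyModel.mlpackage",
                          compute_units=ct.ComputeUnit.CPU_ONLY)

# prediction = model.predict({"x": example_array})  # placeholder input name/value
```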

Deprecations and Removals

* The Caffe converter has been removed. If you are still using the Caffe converter, please use coremltools 4.
* The Keras.io and ONNX converters will be deprecated in coremltools 6. We recommend transitioning to TensorFlow/PyTorch conversion via the unified converter API.
* Methods that were deprecated in coremltools 4, such as `convert_neural_network_weights_to_fp16()` and `convert_neural_network_spec_weights_to_fp16()`, have been removed.
* The `useCPUOnly` parameter for [MLModel](https://apple.github.io/coremltools/source/coremltools.models.html#module-coremltools.models.model) and [MLModel.predict](https://apple.github.io/coremltools/source/coremltools.models.html#coremltools.models.model.MLModel.predict) has been deprecated. Instead, use the `compute_units` parameter for [MLModel](https://apple.github.io/coremltools/source/coremltools.models.html#module-coremltools.models.model) and [coremltools.convert](https://apple.github.io/coremltools/source/coremltools.converters.mil.html#module-coremltools.converters._converters_entry).

5.0b5

* Added support for converting PyTorch tensor assignment statements via the `torch_tensor_assign` and `index_put_` ops (a minimal sketch follows this list). Fixed bugs in the translation of `expand` and `sort` ops.
* Model input/output name sanitization: input and output names for the "neuralnetwork" backend are now sanitized (updated to match the regex `[a-zA-Z_][a-zA-Z0-9_]*`), similar to the "mlprogram" backend. So instead of producing input/output names such as "1" or "input/1", the unified converter API now produces names such as "var_1" or "input_1".
* Fixed a bug preventing a Model Package from being saved more than once to the same path.
* Various bug fixes, optimizations and additional testing.
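
A minimal sketch of the kind of tensor assignment this enables; the module and input shape are hypothetical:

```python
import torch
import coremltools as ct

class MaskFirstChannel(torch.nn.Module):
    def forward(self, x):
        y = x.clone()
        y[:, 0] = 0.0  # tensor assignment statement handled by the new ops
        return y

example = torch.rand(1, 3, 4, 4)  # hypothetical input shape
traced = torch.jit.trace(MaskFirstChannel().eval(), example)
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
```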

5.0b4

* Fixes Python 3.5 and 3.6 errors when importing some specific submodules.
* Fixes Python 3.9 import error for arm64 (#1288).

5.0b3

* Native M1 support for Python 3.8 and Python 3.9
* Adds the `compute_units` parameter to [MLModel](https://apple.github.io/coremltools/source/coremltools.models.html#module-coremltools.models.model) and [coremltools.convert](https://apple.github.io/coremltools/source/coremltools.converters.mil.html#module-coremltools.converters._converters_entry). Use this to specify where your models can run:
  * `ALL` - use all compute units available, including the neural engine.
  * `CPU_ONLY` - limit the model to only use the CPU.
  * `CPU_AND_GPU` - use both the CPU and GPU, but not the neural engine.
* With the above change, the `useCPUOnly` parameter for [MLModel](https://apple.github.io/coremltools/source/coremltools.models.html#module-coremltools.models.model) and [coremltools.convert](https://apple.github.io/coremltools/source/coremltools.converters.mil.html#module-coremltools.converters._converters_entry) is deprecated.
* For ML programs, the default compute precision has changed from float32 to float16. This can be overridden with the `compute_precision` parameter of `coremltools.convert`.
* Support for TensorFlow 2.5
* Removed scipy dependency
* Various bug fixes and optimizations

5.0b2

* Python 3.9 support
* Ubuntu 18 support
* Torch 1.9.0 support
* Added a flag to skip loading a model during conversion. This is useful when converting a model for a newer macOS version while running on an older macOS version.
* New torch ops: affine_grid_generator, grid_sampler, linear, maximum, minimum, SiLUs
* Added the fuse-activation-SiLU graph optimization.
* Added no-op `transpose` handling to the `noop_elimination` pass.
* Various bug fixes and other improvements, including:
  * a bug fix in the `coremltools.utils.rename_feature` utility for the ML Program spec (see the sketch after this list)
  * a bug fix in classifier model conversion for the ML Program target
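
For reference, a minimal sketch of the `rename_feature` utility touched by that fix; the model path and feature names are hypothetical:

```python
import coremltools as ct

model = ct.models.MLModel("MyModel.mlpackage")
spec = model.get_spec()

# Rename an output in the spec (the same utility can rename inputs).
ct.utils.rename_feature(spec, "old_output_name", "new_output_name")

# Rebuild the model; ML program models also need their weights directory.
renamed = ct.models.MLModel(spec, weights_dir=model.weights_dir)
renamed.save("MyModelRenamed.mlpackage")
```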
