onnxruntime

Latest version: v1.17.1


1.17.0

Announcements
In the next release, we will drop support for Windows ARM32 entirely.

General
- Added support for new ONNX 1.15 opsets: [IsInf-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#IsInf-20), [IsNaN-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#IsNaN-20), [DFT-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#DFT-20), [ReduceMax-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ReduceMax-20), [ReduceMin-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#reducemin-20), [AffineGrid-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#AffineGrid-20), [GridSample](https://github.com/onnx/onnx/blob/main/docs/Operators.md#GridSample), [ConstantOfShape-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ConstantOfShape-20), [RegexFullMatch](https://github.com/onnx/onnx/blob/main/docs/Operators.md#RegexFullMatch), [StringConcat](https://github.com/onnx/onnx/blob/main/docs/Operators.md#StringConcat), [StringSplit](https://github.com/onnx/onnx/blob/main/docs/Operators.md#StringSplit), and [ai.onnx.ml.LabelEncoder-4](https://github.com/onnx/onnx/blob/main/docs/Changelog-ml.md#ai.onnx.ml.LabelEncoder-4).
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11, cpuinfo, safeint, and onnx.

Build System and Packages
- Dropped CentOS 7 support. All Linux binaries now require glibc version >=2.28, but users can still build from source to target an older glibc version.
- Added CUDA 12 packages for Python and Nuget.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
- Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
- Added an AMD GPU package with ROCm and MIGraphX (Python + Linux only).
- Split the ONNX Runtime GPU NuGet package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.

Core
- Optimized graph inlining.
- Allowed custom ops to invoke the internal thread pool for parallelism.
- Added support for supplying a custom logger at the session level.
- Added new logging and tracing of session and execution provider options.
- Added new [dynamic ETW provider](https://onnxruntime.ai/docs/performance/tune-performance/logging_tracing.html#Tracing---Windows) that can trace/diagnose ONNX internals while maintaining great performance.

Performance
- Added 4-bit quantization support on NVIDIA GPU and ARM64.

EPs
TensorRT EP
- Added support for direct loading of precompiled TensorRT engines and a customizable engine prefix (a configuration sketch follows this list).
- Added Python support for TensorRT plugins via ORT custom ops.
- Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 → enqueue_v3).
- Fixed various memory leak bugs.
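
A minimal Python sketch of how the engine cache and prefix might be configured (the `trt_*` option names follow the TensorRT EP documentation; the model path, cache directory, and prefix are illustrative):

```python
import onnxruntime as ort

# Minimal sketch, assuming a GPU build with the TensorRT EP available.
# Option names follow the TensorRT EP documentation; the model path,
# cache directory, and prefix below are illustrative only.
trt_options = {
    "trt_engine_cache_enable": True,           # reuse serialized engines across runs
    "trt_engine_cache_path": "./trt_engines",  # where cached engines are stored/loaded
    "trt_engine_cache_prefix": "my_model",     # customizable engine file prefix
}

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        ("TensorrtExecutionProvider", trt_options),
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
```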

QNN EP
- Added support for QNN SDK 2.18.
- Added context binary caching and model initialization optimizations.
- Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), an extreme_power_saver option for htp_performance_mode, and a vtcm_mb setting (see the configuration sketch after this list).
- Fixed a multi-threaded inference bug.
- Fixed various other bugs and added performance improvements.
- QNN [profiling](https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html#configuration-options) of the NPU can be enabled [dynamically with ETW](https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html#Qualcomm-QNN-EP) or [written out to CSV](https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html#Cross-Platform-CSV-Tracing).
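
A minimal Python sketch of passing these QNN EP options (option names follow the QNN EP documentation; the backend library, model path, and values are illustrative):

```python
import onnxruntime as ort

# Minimal sketch, assuming a Windows ARM64 build with the QNN EP and the
# HTP (NPU) backend. Option names follow the QNN EP documentation; the
# model path and the values below are illustrative only.
qnn_options = {
    "backend_path": "QnnHtp.dll",                   # HTP backend library
    "htp_performance_mode": "extreme_power_saver",  # new mode in this release
    "soc_model": "0",                               # device-level options added in 1.17
    "htp_arch": "73",
    "device_id": "0",
    "vtcm_mb": "8",
}

session = ort.InferenceSession(
    "model.qdq.onnx",
    providers=[("QNNExecutionProvider", qnn_options)],
)
```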

OpenVINO EP
- Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.

DirectML EP
- Updated to [DirectML 1.13.1](https://github.com/microsoft/DirectML/blob/master/Releases.md).
- Updated operators LpPool-18 and AveragePool-19 with dilations.
- Improved Python I/O binding support.
- Added RotaryEmbedding.
- Added support for fusing subgraphs into DirectML execution plans.
- Added a new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP (see the sketch after this list).
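
One way to pick a GPU from Python is the DML provider's `device_id` option, shown in the hedged sketch below; whether this option is the specific new API referenced above is an assumption, and the model path and device index are illustrative:

```python
import onnxruntime as ort

# Minimal sketch of selecting a specific GPU with the DirectML EP.
# device_id 1 targets the second adapter enumerated by DirectML; the
# model path is illustrative only.
session = ort.InferenceSession(
    "model.onnx",
    providers=[("DmlExecutionProvider", {"device_id": 1})],
)
```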

Mobile
- Added initial support for 4-bit quantization on ARM64.
- Extended CoreML/NNAPI operator coverage.
- Added support for YOLOv8 pose detection pre/post processing.
- Added support for macOS in CocoaPods package.

Web
- Added support for external data format.
- Added support for I/O bindings.
- Added support for training.
- Added WebGPU optimizations.
- Transitioned WebGPU out of experimental.
- Added FP16 support for WebGPU.

Training

Large Model Training
- Enabled support for QLoRA (with support for BFloat16).
- Added symbolic shape support for Triton codegen (see [PR](https://github.com/microsoft/onnxruntime/pull/18317)).
- Made improvements to recompute optimizer with easy ON/OFF to allow layer-wise recompute (see [PR](https://github.com/microsoft/onnxruntime/pull/18566)).
- Enabled memory-efficient gradient management. For Mistral, we see ~10GB drop in memory consumption when this feature is ON (see [PR](https://github.com/microsoft/onnxruntime/pull/18924)).
- Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention (see [PR](https://github.com/microsoft/onnxruntime/pull/17959)).
- Packages now available for CUDA 11.8 and 12.1.

On Device Training
- On-device training now supports training on the web. This release focuses on federated learning and developer exploration scenarios, with more features coming in future releases.

Extensions
- Modified gen_processing_model tokenizer model to output int64, unifying output datatype of all tokenizers.
- Implemented support for post-processing of YOLO v8 within the Python extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into the output.
- Added support for custom operators implemented with CUDA kernels, including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified bugs.

Known Issues
- The onnxruntime-training package is not yet available on PyPI but can be installed from ADO (Azure DevOps) as follows:

```bash
python -m pip install cerberus flatbuffers h5py "numpy>=1.16.6" onnx packaging protobuf sympy "setuptools>=41.4.0"
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
pip install torch-ort
python -m torch_ort.configure
```

Installation instructions can also be accessed [here](https://onnxruntime.ai/getting-started).
- For models using the int4 kernel only:
  - A crash may occur when int4 is used on Intel CPUs with hybrid cores if the E-cores are disabled in the BIOS. A fix is in progress and will be patched.
  - A performance regression in the int4 kernel on x64 makes the op following MatMulNBits much slower. A fix is in progress and will be patched.
- A bug in the BeamSearch implementation of T5, GPT, and Whisper may break these models under heavy inference load when using BeamSearch on CUDA. See [19345](https://github.com/microsoft/onnxruntime/pull/19345). A fix is in progress and will be patched.
- Full support of ONNX 1.15 opsets is still in progress. A list of new ONNX 1.15 opset support that has been included in this release can be found above in the 'General' section.
- Some Cast nodes will not be removed (see https://github.com/microsoft/onnxruntime/pull/17953): a Cast node from higher precision to lower precision (e.g., fp32 to fp16) will be kept. If model results differ between ORT 1.16 and 1.17, check whether a Cast node that was removed in 1.16 is now kept in 1.17.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
Changming Sun, Yulong Wang, Tianlei Wu, Yi Zhang, Jian Chen, Jiajia Qin, Adrian Lizarraga, Scott McKay, Wanming Lin, pengwa, Hector Li, Chi Lo, Dmitri Smirnov, Edward Chen, Xu Xing, satyajandhyala, Rachel Guo, PeixuanZuo, RandySheriffH, Xavier Dupré, Patrice Vignola, Baiju Meswani, Guenther Schmuelling, Jeff Bloomfield, Vincent Wang, cloudhan, zesongw, Arthur Islamov, Wei-Sheng Chin, Yifan Li, raoanag, Caroline Zhu, Sheil Kumar, Ashwini Khade, liqun Fu, xhcao, aciddelgado, kunal-vaishnavi, Aditya Goel, Hariharan Seshadri, Ye Wang, Adam Pocock, Chen Fu, Jambay Kinley, Kaz Nishimura, Maximilian Müller, Yang Gu, guyang3532, mindest, Abhishek Jindal, Justin Chu, Numfor Tiapo, Prathik Rao, Yufeng Li, cao lei, snadampal, sophies927, BoarQing, Bowen Bao, George Wu, Jiajie Hu, MistEO, Nat Kershaw (MSFT), Sumit Agarwal, Ted Themistokleous, ivberg, zhijiang, Christian Larson, Frank Dong, Jeff Daily, Nicolò Lucchesi, Pranav Sharma, Preetha Veeramalai, Cheng Tang, Xiang Zhang, junchao-loongson, petermcaughan, rui-ren, shaahji, simonjub, trajep, Adam Louly, Akshay Sonawane, Artem Shilkin, Atanas Dimitrov, AtanasDimitrovQC, BODAPATIMAHESH, Bart Verhagen, Ben Niu, Benedikt Hilmes
Brian Lambert, David Justice, Deoksang Kim, Ella Charlaix, Emmanuel Ferdman, Faith Xu, Frank Baele, George Nash, hans00, computerscienceiscool, Jake Mathern, James Baker, Jiangzhuo, Kevin Chen, Lennart Hannink, Lukas Berbuer, Mike Guo, Milos Puzovic, Mustafa Ateş Uzun, Peishen Yan, Ran Gal, Ryan Hill, Steven Roussey, Suryaprakash Shanmugam, Vadym Stupakov, Yiming Hu, Yueqing Zhang, Yvonne Chen, Zhang Lei, Zhipeng Han, aimilefth, gunandrose4u, kailums, kushalpatil07, kyoshisuki, luoyu-intel, moyo1997, tbqh, weischan-quic, wejoncy, winskuo-quic, wirthual, yuwenzho

1.16.3

What's Changed
1. Stable Diffusion XL demo update by tianleiwu in https://github.com/microsoft/onnxruntime/pull/18496
2. Fixed a memory leak issue (#18466) in the TensorRT EP by chilo-ms in https://github.com/microsoft/onnxruntime/pull/18467
3. Fixed a use-after-free bug in the SaveInputOutputNamesToNodeMapping function by snnn in https://github.com/microsoft/onnxruntime/pull/18456. The issue was found by AddressSanitizer.

1.16.2

This patch release includes the following updates:

* Performance optimizations for Llama2 on CUDA EP and DirectML EP
* Performance optimizations for Stable Diffusion XL model for CUDA EP
* [Demos](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md) for text to image generation
* Mobile bug fixes for a crash on some older 64-bit ARM devices and an AOT inlining issue on iOS with C# bindings
* TensorRT EP bug fixes for user provided compute stream and stream synchronization

1.16.1

This patch release on top of 1.16 includes the following fixes:

- Fix type of weights and activations in the ONNX quantizer
- Fix quantization bug in historic quantizer (#17619)
- Enable session option access in nodejs API
- Update nodejs to v18
- Align ONNX Runtime extensions inclusion in source and build
- Limit per thread context to 1 in the TensorRT EP to avoid error caused by input shape changes

1.16.0

General
* Support for serialization of models >=2GB

APIs
* New session option to disable the default CPU EP fallback, `session.disable_cpu_ep_fallback` (a usage sketch follows this list)
* Java
  * Support for fp16 and bf16 tensors as inputs and outputs, along with utilities to convert between these and fp32 data. On JDK 20 and newer, the fp16 conversion methods use the JDK's Float.float16ToFloat and Float.floatToFloat16 methods, which can be hardware accelerated and vectorized on some platforms.
  * Support for external initializers so that large models can be instantiated without filesystem access
* C#
  * Expose OrtValue API as the new preferred API to run inference in C#. This reduces garbage and exposes direct native memory access via Slice-like interfaces.
  * Make Float16 and BFloat16 full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g., IsNaN, IsInfinity, etc.)
* C++
  * Make Float16_t and BFloat16_t full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g., IsNaN, IsInfinity, etc.)
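
A minimal Python sketch of the CPU EP fallback option (the config key comes from the note above; the model path and provider choice are illustrative):

```python
import onnxruntime as ort

# Minimal sketch; the session config key is taken from the release note above,
# while the model path and provider list are illustrative only.
so = ort.SessionOptions()
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")

# With CPU fallback disabled, session creation is expected to fail instead of
# silently falling back to the CPU EP when the requested EP cannot be used.
session = ort.InferenceSession(
    "model.onnx",
    sess_options=so,
    providers=["CUDAExecutionProvider"],
)
```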


Performance
* Improve LLM quantization accuracy with SmoothQuant
* Support 4-bit quantization on CPU
* Optimize BeamScore to improve BeamSearch performance
* Add FlashAttention v2 support for Attention, MultiHeadAttention and PackedMultiHeadAttention ops

Execution Providers
* CUDA EP
  * Initial fp8 support (QDQ, Cast, MatMul)
  * Relax CUDA Graph constraints to allow more models to utilize it
  * Allow CUDA allocator to be registered with ONNX Runtime externally
  * Fixed a build issue with CUDA 12.2 (#16713)
* TensorRT EP
  * CUDA Graph support
  * Support user-provided CUDA compute stream
  * Misc bug fixes and improvements
* OpenVINO EP
  * Support OpenVINO 2023.1
* QNN EP
  * Enable context binary cache to reduce initialization time
  * Support QNN 2.12
  * Support for resize with asymmetric transformation mode on HTP backend
  * Ops support: Equal, Less, LessOrEqual, Greater, GreaterOrEqual, LayerNorm, Asin, Sign, DepthToSpace, SpaceToDepth
  * Support 1D Conv/ConvTranspose
  * Misc bug fixes and improvements

Mobile
* Initial support for [Azure EP](https://onnxruntime.ai/docs/execution-providers/Azure-ExecutionProvider.html)
* Dynamic shape support for CoreML
* Improve React Native performance with JSI
* Mobile support for CLIPImageProcessor pre-processing and CLIP scenario
* Swift Package Manager support for ONNX Runtime inference and ONNX Runtime extensions via [onnxruntime-swift-package-manager](https://github.com/microsoft/onnxruntime-swift-package-manager)

Web
* WebGPU ops coverage improvements (SAM, T5, Whisper)
* WebNN ops coverage improvements (SAM, Stable Diffusion)
* Stability/usability improvements for WebGPU

Large model training
* ORTModule + OpenAI Triton integration now available (see the sketch after this list). [See details here](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#6-use-openai-triton-to-compute-onnx-sub-graph)
* [Label Sparsity compute optimization](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_compute_optimizer) support complete and enabled by default starting with release 1.16
* **New experimental** embedding [sparsity-related optimizations](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_embedding_sparse_optimizer) available (disabled by default).
  * Improves training performance of RoBERTa in Transformers by 20-30%
  * Other compute optimizations like Gather/Slice/Reshape upstream support enabled.
* Optimizations for [LLaMAv2 (~10% acceleration)](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/training/text-classification#text-classification) and OpenAI Whisper
* Improvements to the logging and metrics system (initialization overhead, memory usage, statistics convergence tool, etc.).
* PythonOp enhancements: bool and tuple[bool] constants, materialize grads, empty inputs, save in context, customized shape inference, use fully qualified names for export.
* SCELossInternal/SCELossGradInternal CUDA kernels can handle element counts greater than std::numeric_limits<int32_t>::max.
* Improvements to LayerNorm fusion
* [Model cache](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_cache_dir) for the exported ONNX model introduced to avoid repeatedly exporting a model that has not changed across runs.
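
A minimal sketch of wrapping a PyTorch model with ORTModule (requires the onnxruntime-training package; the model and training step are illustrative, and the optimizations above are toggled via the ORTMODULE_* environment variables described in the linked guidelines):

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

# Minimal sketch: the model and training step below are illustrative only.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
model = ORTModule(model)  # forward/backward now execute through ONNX Runtime

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
inputs = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))

loss = torch.nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
optimizer.step()
```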

On-Device Training
* iOS support available starting this release
* Minimal build now available for On-Device Training. Basic binary size ~1.5 MB
* ORT-Extensions custom op support enabled through onnxblock for on-device training scenarios

ORT Extensions
This ORT release is accompanied by updates to [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions/). Features include:
* New Python API gen_processing_models to export ONNX data processing models from Hugging Face tokenizers such as LLaMA, CLIP, XLM-RoBERTa, Falcon, BERT, etc.
* New TrieTokenizer operator for RWKV-like LLM models, and other tokenizer operator enhancements.
* New operators for Azure EP compatibility: AzureAudioToText, AzureTextToText, AzureTritonInvoker for Python and NuGet packages.
* Processing operators have been migrated to the new [Lite Custom Op API](https://github.com/microsoft/onnxruntime/blob/gh-pages/docs/reference/operators/add-custom-op.md#define-and-register-a-custom-operator)

---
Known Issues
* The ORT CPU Python package requires the execution provider to be explicitly provided. See #17631. A fix is in progress and will be patched.
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [snnn](https://github.com/snnn), [pengwa](https://github.com/pengwa), [mszhanyi](https://github.com/mszhanyi), [PeixuanZuo](https://github.com/PeixuanZuo), [tianleiwu](https://github.com/tianleiwu), [adrianlizarraga](https://github.com/adrianlizarraga), [baijumeswani](https://github.com/baijumeswani), [cloudhan](https://github.com/cloudhan), [satyajandhyala](https://github.com/satyajandhyala), [yuslepukhin](https://github.com/yuslepukhin), [RandyShuai](https://github.com/RandyShuai), [RandySheriffH](https://github.com/RandySheriffH), [skottmckay](https://github.com/skottmckay), [Honry](https://github.com/Honry), [dependabot[bot]](https://github.com/dependabot[bot]), [HectorSVC](https://github.com/HectorSVC), [jchen351](https://github.com/jchen351), [chilo-ms](https://github.com/chilo-ms), [YUNQIUGUO](https://github.com/YUNQIUGUO), [justinchuby](https://github.com/justinchuby), [PatriceVignola](https://github.com/PatriceVignola), [guschmue](https://github.com/guschmue), [yf711](https://github.com/yf711), [Craigacp](https://github.com/Craigacp), [smk2007](https://github.com/smk2007), [RyanUnderhill](https://github.com/RyanUnderhill), [jslhcl](https://github.com/jslhcl), [wschin](https://github.com/wschin), [kunal-vaishnavi](https://github.com/kunal-vaishnavi), [mindest](https://github.com/mindest), [xadupre](https://github.com/xadupre), [fdwr](https://github.com/fdwr), [hariharans29](https://github.com/hariharans29), [AdamLouly](https://github.com/AdamLouly), [wejoncy](https://github.com/wejoncy), [chenfucn](https://github.com/chenfucn), [pranavsharma](https://github.com/pranavsharma), [yufenglee](https://github.com/yufenglee), [zhijxu-MS](https://github.com/zhijxu-MS), [jeffdaily](https://github.com/jeffdaily), [natke](https://github.com/natke), [jeffbloo](https://github.com/jeffbloo), [liqunfu](https://github.com/liqunfu), [wangyems](https://github.com/wangyems), [er3x3](https://github.com/er3x3), [nums11](https://github.com/nums11), [yihonglyu](https://github.com/yihonglyu), [sumitsays](https://github.com/sumitsays), [zhanghuanrong](https://github.com/zhanghuanrong), [askhade](https://github.com/askhade), [wenbingl](https://github.com/wenbingl), [jingyanwangms](https://github.com/jingyanwangms), [ashari4](https://github.com/ashari4), [gramalingam](https://github.com/gramalingam), [georgen117](https://github.com/georgen117), [sfatimar](https://github.com/sfatimar), [BowenBao](https://github.com/BowenBao), [hanbitmyths](https://github.com/hanbitmyths), [stevenlix](https://github.com/stevenlix), [jywu-msft](https://github.com/jywu-msft)
