Upgrade log4j to 2.17.1 to fix CVE-2021-44832 in 2.17.0


Upgrade log4j2 version to 2.17.0 to address CVE-2021-45046 and CVE-2021-45105


* Added disruptor dependency for async logging
* If using `sagemaker-inference-toolkit`, upgrade to version >= v1.5.9


* Upgrade log4j2 version to 2.16.0 to address CVE-2021-44228 and CVE-2021-45046
* Updated logging docs to address migration from log4j v1 to v2
* Code fix for setting `preload_model` default as null for register model request
* Fix channel closures in ModelServerTest


Allows a custom HTTP status set in mms.service.Service to be returned to the client


This release contains minor fixes to bump up Netty & Log4j versions.


This release contains minor fixes to make sure resource cleaning is done for terminated worker threads
* Terminates the STDOUT and STDERR ReaderThreads for a Worker when it is scaled down


This release contains API changes and fixes to make sure resource cleaning is handled synchronously.
* The load model API sends a conflict response instead of a bad request response when trying to register an already registered model. 851
* The unregister model API is now synchronous and waits until all resources are cleaned up before sending a response back. A timeout option was also added to the config for users who don't want to wait. 853
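The synchronous unregister flow above can be sketched with a minimal client helper. The host, port, and model name below are illustrative assumptions; only the `DELETE /models/{model_name}` path follows the management API described here.

```python
# Sketch of a client for the now-synchronous unregister API.
# Host/port and model name are illustrative assumptions.
from urllib.parse import urljoin

def unregister_url(management_host: str, model_name: str) -> str:
    """Build the DELETE /models/{model_name} URL for the management API."""
    return urljoin(management_host, f"/models/{model_name}")

url = unregister_url("http://localhost:8081", "squeezenet")
# A real client would now issue: requests.delete(url)
# MMS blocks until all worker resources are cleaned up (or the configured
# timeout elapses) before responding.
```

A client that does not want to block can rely on the new config timeout described above.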


This release contains a minor bug fix for Python 2 support.
* Changed the Python protocol handler between the frontend and backend to better support Python 2.


* Load model API takes in JSON requests. 818
* Implementation of Ping API using the plugins SDK. 814
* Newer endpoint for predictions. `POST /models/{model-id}/invoke`. 823
* Handling OOM errors. MMS returns an HTTP 507 error code when an OOM error occurs at runtime. 822
* Added changes to allow MMS to have the same Management and Inference addresses. 826
* Changes to MMS default behavior. MMS by default runs `POST /models` in a synchronous way and if there are `default_workers_per_model`, this value will be used when loading models. 836
* MMS configuration values can take environment variables. 841
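As a quick illustration of the newer prediction endpoint above, a client only needs to build the `/models/{model-id}/invoke` path; the model name below is a hypothetical example.

```python
# Sketch of addressing the newer prediction endpoint described above.
# The model id is an illustrative assumption.
def invoke_path(model_id: str) -> str:
    """Path for POST /models/{model-id}/invoke."""
    return f"/models/{model_id}/invoke"

path = invoke_path("squeezenet")
# POSTing a request body to this path on the inference address returns the
# prediction; MMS answers HTTP 507 if it hits an OOM error at runtime.
```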


This release contains multiple model server changes

Major features
1. Plugins support
1. SDK for plugins
1. Reference plugins implementation
1. MMS changes to support plugins
1. Feature to support a configurable default service file.
1. Feature to support return of custom HTTP headers from the model.

Minor features
1. Option to run MMS in the foreground
And multiple bug fixes


This release contains multiple model-archiver features and bug fixes.

1. Added support for "no-archive"
2. Added feature to support optional conversion of ONNX model to MXNet model
3. Added integration test framework for model-archiver.


Features and Bug Fixes
* Published base MMS containers for Python 2.7 and Python 3.6 on Ubuntu 16.04, including nvidia/cuda 9.2 with cuDNN 7.
* model-archiver changes to handle multiple archive formats
* model-server configurable through environment variables
* Contains multiple bug fixes


In this release we have addressed all reported bugs and added the enhancements below.

Features and Major Bug Fixes
* Frontend listening on Unix Domain Socket.
* Support Asynchronous logging.
* Added documentation for batching support.
* Added features to support:
  * Starting a default number of workers for models launched at MMS startup time.
  * A configurable response timeout for individual models. This is the amount of time MMS waits for the model to respond to a request.
  * Configurable maximum allowable request and response sizes.
* Changes for new Container images.
* Passing all HTTP headers to the backend worker.
* Adding shufflenet to the model server model-zoo.
* Adding example to bring sockeye model onto MMS.
... And bug fixes
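The configurable limits above would typically live in the server's `config.properties`. The fragment below is a sketch: key names are assumptions based on MMS conventions and should be verified against the configuration documentation for your version.

```properties
# Sketch of a config.properties fragment for the options above.
# Key names and values are illustrative assumptions.
default_workers_per_model=2
default_response_timeout=120
max_request_size=6553500
max_response_size=6553500
```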


In this release of MXNet Model Server, we have added the following features.

Features and Bug fixes
* Changes for batching support.
* CORS headers support added to responses.
* Handle the content-type returned by the backend code and pass ContentType to the service code
* Worked around the `import mxnet` timeout issue; MXNet startup time no longer causes a significant delay when MMS starts on compute-optimized hosts
* Make sure that Python prints are not buffered
* Refactor metrics emission logic
* Always use UTF-8 to decode bytes.
* Avoid archiving a model archive file recursively.
* Fixed PYTHONPATH issues for MMS
* Documentation updates


In this release of MXNet Model Server, we have added the following major features.

1. Loading and unloading models at run-time (hot loading models). This is now available via the management REST API exposed by MMS. See the Management API documentation for more.
2. Independently scale the number of model-worker instances serving inference requests. This is available through the management REST API.
3. Improved model archive representation. See the model-archiver documentation for more.
4. Improved docker container images.
5. Improved performance compared to MMS v0.4 and decreased dependencies. One of the major changes is replacing the monolithic architecture with separate frontend and backend: Netty is used as the frontend webserver instead of the Flask + Gunicorn combo, and Python is used for the backend.
6. Improved logging and metrics collection, using log4j and the corresponding config to control metrics, including custom user metrics. See the logging configuration documentation for more.
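The hot-loading and scaling features above can be sketched as a pair of request builders. The host, model URL, and parameter names (`url`, `initial_workers`, `min_worker`) are assumptions for illustration and should be checked against the Management API documentation.

```python
# Sketch: building hot-load and scale requests for the MMS 1.0
# management REST API described above. Parameter names and values
# are illustrative assumptions.
from urllib.parse import urlencode

def register_query(model_url: str, initial_workers: int = 1) -> str:
    """Query string for POST /models (hot-load a model at run-time)."""
    return "/models?" + urlencode({"url": model_url, "initial_workers": initial_workers})

def scale_query(model_name: str, min_worker: int) -> str:
    """Query string for PUT /models/{model_name} (scale worker instances)."""
    return f"/models/{model_name}?" + urlencode({"min_worker": min_worker})

q = register_query("https://example.com/squeezenet.mar", initial_workers=2)
s = scale_query("squeezenet", min_worker=4)
```

A real client would send these paths to the management address (separate from the inference address by default).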

New and updated documents:
1. Migration document for moving from MMS 0.4 to MMS 1.0.
1. New Management API documentation.
1. Updated model zoo.
1. Updated Inference API documentation.

For further documentation, please refer to the /docs folder.

Bug fixes:
This release fixes all the bugs logged on GitHub.


New Features:
Gluon imperative model support
1. Added support for serving Gluon-based imperative models.

Docs, Improvements, Bug fixes
* Added documentation on Gluon model services.
* Added alexnet in Gluon to the model serving examples.
* Added Character-level Convolutional Neural Networks in Gluon to the model serving examples.

* Gluon base service class implementation. (vdantu )
* Improved Docker image setup time by layering docker images. (vrakesh, 343)
* Docker image now can auto-detect number of available CPUs. (vrakesh, 360 )
* Added pylint support. (vdantu)
* Use cu90mkl mxnet on cuda gpu machines by default. (vrakesh, 390 )

Bug Fixes:
* Fixed an issue where an empty folder was created when an invalid model path was specified. (vrakesh, 320)
* Docker images now do not allow multiple instances of MMS to run. (vrakesh, 337 )
* Fixed PyPI summary issue. (aaronmarkham, 378)
* Fixed error propagation from custom service to MMS. (vrakesh, 387 )
* Fixed documentation bugs. (vrakesh, 401 , 402 )
* Fixed version reading issue in MMS. (vrakesh, 395 )
* Fixed post process latencies being high due to inference variables being lazy evaluated. (vrakesh, 414 )


New Features:
New CLI to interact with MMS running in a container
1. New options to start/stop/restart MMS in container.
2. Option to point to different configuration files for each MMS run.
3. Multiple bug fixes.

Optimized and pre-configured MMS container images
1. Published the container image to Docker Hub.
2. The default configuration in these containers and the example configuration in the repository are optimized for CPU and GPU AWS EC2 instances.

Bug fixes and Docs
* README documents.
* Added docs describing how to orchestrate MMS as an AWS Fargate service.
* Added docs for optimizing the MMS configuration for different EC2 instances.

Bug Fixes:
* Corrected Readme and advanced-settings doc for MMS container (aaronmarkham )
* Documentation for optimized setup for GPU and CPU EC2 instances (ankkhedia)
* Optimized MMS GPU container to utilize all GPUs in a GPU instance (ankkhedia)
* Documentation for launching MMS on AWS Fargate service
* Added integration tests framework (ankkhedia )
* Doc update on Production usage. Describes why Container images are better for prod. (336)
* Streamlining Container based MMS orchestration (vdantu)
* Optimized the model file downloads for container runs of MMS. (vdantu)
* Fixed bugs in mxnet-model-export (ankkhedia )


New features

ONNX model support

Model server now supports models stored in the Open Neural Network Exchange (ONNX) format. See the Export an ONNX Model documentation for details.

Cloudwatch metrics

Model server can publish host- and model-related metrics to Amazon CloudWatch. See the Cloudwatch metrics documentation for details.

Improvements and bug fixes

- Fixing LatencyOverall unit reporting (lupesko, 317)
- update onnx-mxnet (jesterhazy, 316)
- remove docs images (jesterhazy, 315)
- added cloudwatch metrics section (aaronmarkham, 314)
- update docker scripts (jesterhazy, 313)
- added toc, logos, and kitten image (aaronmarkham, 311)
- add unit test for hyphenated model files (jesterhazy, 308)
- Fix validate_prefix_match (knjcode, 307)
- align metrics names/units with standard cloudwatch metrics (jesterhazy, 303)
- Fix race condition when multiple gunicorn workers try to download same models (yuruofeifei, 302)
- Fix epoch number validation (knjcode, 298)
- remove License field (jesterhazy, 297)
- update error messages for model export (jesterhazy, 295)
- Fixing and updating docker setup (lupesko, 294)
- public domain image examples for SSD outputs (aaronmarkham, 291)
- refactored export info; added shortcuts and other improvements (aaronmarkham, 290)
- added four onnx-exported models to zoo; added onnx support to server intro (aaronmarkham, 288)
- Documentation updates for 0.2 (aaronmarkham, 285)
- Improve cloudwatch integration, fix several issues. (yuruofeifei, 283)
- fail fast when user tries to serve onnx model directly (jesterhazy, 280)
- fix importlib warning (254) (jesterhazy, 279)
- bump version (yuruofeifei, 250)
- Onnx and metrics docs (yuruofeifei, 244)
- Add onnx support (yuruofeifei, 240)
- Zoo updates (aaronmarkham, 234)
- Zoo updates with details for each model (aaronmarkham, 233)


Key capabilities of Model Server for Apache MXNet v0.1.5:
- Tooling to package and export all model artifacts into a single “model archive” file that encapsulates everything required for serving an MXNet model.
- Automated setup of a serving stack, including HTTP inference endpoints, MXNet-based engine, all automatically configured for the specific models being hosted.
- Pre-configured Docker images, setup with NGINX, MXNet and MMS, for scalable model serving.
- Ability to customize every step in the inference execution pipeline, from model initialization, through pre-processing and inference, and up to post-processing the model’s output.
- Real time operational metrics to monitor the inference service and endpoints, covering key metrics such as latencies, resource utilization and errors.
- OpenAPI-enabled service that is easy to integrate with and can auto-generate client code for popular stacks such as Java, JavaScript, C and more.
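The customizable inference pipeline in the fourth bullet can be pictured as a chain of hooks running from initialization through post-processing. The class and method names below are hypothetical stand-ins for illustration, not the actual MMS base-service API.

```python
# Illustrative sketch of the customizable inference pipeline described above.
# Class, method names, and the stand-in model are hypothetical.
class SketchModelService:
    def initialize(self):
        # In MMS this would load the MXNet model artifacts; here a
        # doubling function stands in for the model.
        self.model = lambda xs: [x * 2 for x in xs]

    def preprocess(self, data):
        # e.g. decode raw request payloads into model inputs
        return [float(x) for x in data]

    def inference(self, data):
        return self.model(data)

    def postprocess(self, data):
        # e.g. wrap raw outputs into a response payload
        return {"predictions": data}

    def handle(self, data):
        # The full per-request pipeline: each step is overridable.
        return self.postprocess(self.inference(self.preprocess(data)))

svc = SketchModelService()
svc.initialize()
result = svc.handle(["1", "2"])  # → {"predictions": [2.0, 4.0]}
```

Overriding any single hook customizes just that step of the pipeline while the rest of the serving stack stays unchanged.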