PyPI: mxnet

CVE-2021-44832

Transitive

Safety vulnerability ID: 44455

This vulnerability was reviewed by experts

The information on this page was manually curated by our Cybersecurity Intelligence Team.

Created at Dec 28, 2021. Updated at Nov 07, 2023.

Advisory

mxnet versions 1.4.0 through 1.6.0 (inclusive) use a version of 'log4j' affected by critical and severe vulnerabilities.
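As a quick triage step, you can check whether an installed copy falls in the range quoted above. A minimal sketch (the `packaging` dependency is an assumption; any version-comparison helper works):

```python
import mxnet
from packaging.version import Version  # assumption: packaging is installed

v = Version(mxnet.__version__)
if Version("1.4.0") <= v <= Version("1.6.0"):
    print(f"mxnet {v} is in the affected range; upgrade to a fixed release")
```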

Affected package

mxnet

Latest version: 1.9.1

Apache MXNet is an ultra-scalable deep learning framework. This version uses openblas and MKLDNN.

Affected versions

Fixed versions

Vulnerability changelog

New Features

Automatic Mixed Precision (experimental)
Training Deep Learning networks is a very computationally intensive task. Novel model architectures tend to have increasing numbers of layers and parameters, which slow down training. Fortunately, software optimizations and new generations of training hardware make it a feasible task.
However, most of the hardware and software optimization opportunities exist in exploiting lower precision (e.g. FP16) to, for example, utilize Tensor Cores available on new Volta and Turing GPUs. While training in FP16 showed great success in image classification tasks, other more complicated neural networks typically stayed in FP32 due to difficulties in applying the FP16 training guidelines.
That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit while conservatively keeping operations that are unsafe to do in FP16 in full FP32 precision. To learn more about AMP, check out this [tutorial](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/amp/amp_tutorial.md).
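The basic call sequence from the tutorial looks roughly like the following; a minimal sketch with a toy network (AMP targets GPUs with Tensor Cores, so a CPU context here only illustrates the API flow):

```python
import mxnet as mx
from mxnet import autograd, gluon
from mxnet.contrib import amp

amp.init()  # patch FP32 operators for mixed precision; call before building the network

net = gluon.nn.Dense(10)
net.initialize(ctx=mx.cpu())
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
amp.init_trainer(trainer)  # enable dynamic loss scaling on the trainer

data = mx.nd.random.uniform(shape=(4, 8))
label = mx.nd.array([0, 1, 2, 3])
with autograd.record():
    out = net(data)
    loss = loss_fn(out, label)
    # scale the loss so small FP16 gradients do not underflow
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
trainer.step(batch_size=4)
```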

MKL-DNN Reduced precision inference and RNN API support
MKL-DNN introduces two advanced features in its recent versions: fused computation and reduced-precision kernels. These features can significantly speed up inference performance on CPU for a broad range of deep learning topologies. The MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications, including image classification, object detection, and natural language processing. Refer to the [MKL-DNN operator documentation](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for more information.
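A hedged sketch of the symbolic INT8 quantization flow on CPU, using a tiny model with random weights in place of a trained network (the exact `quantize_model` signature may vary between releases):

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# a tiny FP32 symbolic model standing in for a trained network
data = mx.sym.Variable('data')
sym = mx.sym.FullyConnected(data, num_hidden=10, name='fc')
sym = mx.sym.SoftmaxOutput(sym, name='softmax')
arg_params = {'fc_weight': mx.nd.random.uniform(shape=(10, 8)),
              'fc_bias': mx.nd.zeros((10,))}
aux_params = {}

qsym, qarg_params, aux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(),         # MKL-DNN INT8 kernels target the CPU
    calib_mode='none',    # skip calibration for this sketch; real use needs calib_data
    quantized_dtype='int8')
```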

Dynamic Shape (experimental)
MXNet now supports Dynamic Shape in both imperative and symbolic mode. MXNet used to require that operators statically infer the output shapes from the input shapes. However, there exist some operators that don't meet this requirement. Examples are:
* while_loop: its output size depends on the number of iterations in the loop.
* boolean indexing: its output size depends on the value of the input data.
* many operators can be extended to take a shape symbol as input, and that shape symbol can determine the output shape of these operators (with this extension, the symbol interface of MXNet can fully support shape).
To support dynamic shape and such operators, we have modified the MXNet backend. Now MXNet supports operators with dynamic shape, such as [`contrib.while_loop`](https://mxnet.apache.org/api/python/ndarray/contrib.html#mxnet.ndarray.contrib.while_loop), [`contrib.cond`](https://mxnet.apache.org/api/python/ndarray/contrib.html#mxnet.ndarray.contrib.cond), and [`mxnet.ndarray.contrib.boolean_mask`](https://mxnet.apache.org/api/python/ndarray/contrib.html#contrib).
Note: Currently dynamic shape does not work with Gluon deferred initialization.
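For instance, `boolean_mask` produces an output whose first dimension depends on how many mask entries are nonzero, which cannot be inferred statically:

```python
import mxnet as mx

data = mx.nd.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])
index = mx.nd.array([0, 1, 1])  # mask values are only known at runtime
out = mx.nd.contrib.boolean_mask(data, index)
print(out)  # rows 1 and 2 of data; output shape (2, 3) depends on the mask
```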

Large Tensor Support
Currently, MXNet supports a maximal tensor size of around 4 billion (2^32) elements. This is due to uint32_t being used as the default data type for tensor size, as well as variable indexing.
This limitation has created many problems when larger tensors are used in the model.
A naive solution to this problem is to replace all uint32_t in the MXNet backend source code with int64_t.
This solution is not viable, however, for several reasons. First, many data structures use uint32_t as the data type for their members, and unnecessarily replacing these variables with int64_t would increase memory consumption, creating another limitation. Second, MXNet has many submodule dependencies; updating the variable types in the MXNet repository is not enough, and we also need to make sure that different libraries, such as MKLDNN and MShadow, support the int64_t integer data type. Third, many front-end APIs assume an unsigned 32-bit integer interface, and only updating the interface in C/C++ would cause all the language bindings to fail.
Therefore, we need a systematic approach to enhance MXNet to support large tensors.
Now you can enable large tensor support by changing the following build flag to 1: `USE_INT64_TENSOR_SIZE = 1`. Note this is set to 0 by default.
For more details please refer to the [design document](https://cwiki.apache.org/confluence/display/MXNET/Large+Tensor+Support).
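After building with the flag, the setting can be checked at runtime through MXNet's feature-detection API; a minimal sketch (the feature name `INT64_TENSOR_SIZE` is an assumption based on the build flag):

```python
from mxnet.runtime import Features

features = Features()
# assumption: the build flag is surfaced under this feature name
print(features.is_enabled('INT64_TENSOR_SIZE'))
```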

Dependency Update
MXNet has added support for CUDA 10, CUDA 10.1, cuDNN 7.5, NCCL 2.4.2, and NumPy 1.16.0.
These updates are available through PyPI packages and builds from source; refer to the [installation guide](https://mxnet.apache.org/versions/master/install/index.html) for more details.

Gluon Fit API (experimental)
Training a model in Gluon requires users to write the training loop. This is useful because of its imperative nature; however, repeating the same boilerplate code across multiple models can become tedious.
The training loop can also be overwhelming to some users new to deep learning. We have introduced an Estimator and Fit API to help facilitate the training loop.
Note: this feature is still experimental, for more details, refer to [design document](https://cwiki.apache.org/confluence/display/MXNET/Gluon+Fit+API+-+Tech+Design).
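A hedged sketch of the Fit API flow with a toy dataset; the module path and constructor follow the 1.5 contrib layout described in the design document and may differ in later releases:

```python
import mxnet as mx
from mxnet import gluon
from mxnet.gluon.contrib.estimator import Estimator  # experimental; lives in gluon.contrib

net = gluon.nn.Dense(10)
net.initialize()
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})

# toy dataset standing in for a real DataLoader
X = mx.nd.random.uniform(shape=(100, 8))
y = mx.nd.random.randint(0, 10, shape=(100,)).astype('float32')
train_loader = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y), batch_size=10)

est = Estimator(net=net, loss=loss_fn, metrics=mx.metric.Accuracy(), trainer=trainer)
est.fit(train_data=train_loader, epochs=2)  # replaces the hand-written training loop
```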

New Operators
* split_v2 (13687)
* Gradient multiplier (contrib) operator (13632)
* Image normalize operator - GPU support, 3D/4D inputs (13802)
* Image ToTensor operator - GPU support, 3D/4D inputs (13837)
* Add Gluon Transformer Crop (14259)
* GELU (14449)
* AdamW operator (Fixing Weight Decay Regularization in Adam) (13728)
* [MXNET-1382] Add the index_array operator (14638)
* add an operator for computing the likelihood of a Hawkes self-exciting process (14683)
* Add numpy linspace (14927)


Feature Improvements

Operators
* make ROIAlign support position-sensitive pooling (13088)
* Add erfinv operator for calculating inverse error function (13811)
* Added optional parameters to BilinearResize2D to do relative scaling (13985)
* MXNET-1295 Adding integer index support to Sequence* family of operators. (13880)
* Export resize and support batch size (14014)
* CUDNN dropout (13896)
* Relaxing type requirements for slice_like op (14097)
* Relaxing type requirements for reshape_like op (14325)
* Parallelize CPU version and add GPU version of boolean_mask op (14090)
* Add NHWC layout support to Pooling (cpu, gpu cuda, gpu cuDNN) (13749)
* Multi-precision AdamW update op (14171)
* [op] add back support for scalar type rescale_grad argument for adamw_update/mp_adamw_update (14221)
* move choose_element_0index to operator (14273)
* Optimize NMS (14290)
* Optimize NMS part 2 (14352)
* add background class in box_nms (14058)
* Use cudnn for dropout by default (14278)
* In-place updates for Nadam, Adadelta, Adamax and SGLD (13960)
* Aggregate SGD (13346)
* Add proper exception message for negative shape in array creation routines (14362)
* Support multi-threading for Custom Operator (14363)
* moveaxis operator now accepts negative indices and sequence of ints as well. (14321)
* Support SyncBatchNorm5D (14542)
* Add nd.power and sym.pow (14606)
* Change RNN OP to stateful (14476)
* Add imresize and copyMakeBorder to mx.image (13357)
* add ctx for rand_ndarray and rand_sparse_ndarray (14966)
* Add cpu implementation for Deformable PSROIPooling (14886)
* Add warning for fp16 inputs with MXNET_SAFE_ACCUMULATION=0 (15046)
* Safe LayerNorm (15002)
* use MXNET_SAFE_ACCUMULATION for softmax accumulator (15037)
* LayerNorm acceleration on GPU (14935)
* Add matrix inversion operator in linalg (14963)
* implementation for equivalence of tf.moments (14842)
* Use env var to enforce safe accumulation in ReduceAxesCompute (14830)
* [MXNet-1211] Factor and "Like" modes in BilinearResize2D operator (13226)
* added extraction/generation of diagonal and triangular matrices to linalg (14501)
* [Mxnet-1397] Support symbolic api for requantize and dequantize (14749)
* [MXNET-978] Support higher order gradient for `log`. (14992)
* Add cpu implementation for Deformable Convolution (14879)

MKLDNN
* Feature/mkldnn static (13628)
* Feature/mkldnn static 2 (13503)
* support mkl log when dtype is fp32 or fp64 (13150)
* Add reshape op supported by MKL-DNN (12980)
* Move the debug output message into MXNET_MKLDNN_DEBUG (13662)
* Integrate MKLDNN Conv1d and support 3d layout (13530)
* Making MKL-DNN default on MXNet master (13681)
* Add mkldnn OP for slice (13730)
* mkldnn s8 conv API change for master (13903)
* [MKLDNN] Enable signed int8 support for convolution. (13697)
* add mkldnn softmax_output (13699)
* MKLDNN based Quantized FullyConnected Operator and its fusion (14128)
* Fix entropy for uint8 (14150)
* Update MKL-DNN to v0.18 release (was: fix the Dense layer issue) (13668)
* [MKL-DNN] Enable s8 support for inner product and 3d input with flatten=false (14466)
* Optimize transpose operator with MKL-DNN (14545)
* [MKLDNN] Remove repeat parts in MKLDNN.md (14995)
* [MKLDNN] Enable more convolution + activation fusion (14819)
* Update MKL-DNN submodule to v0.19 (14783)
* Add mkldnn_version.h to pip package (14899)
* [MKLDNN] add quantized sum (14614)
* [MKLDNN]Refactor requantize to speed up execution (14608)
* [MKLDNN]Add quantized relu (14604)
* Add MKLDNN headers to pip package (14339)
* add symbolic link to mkldnn header files in include (14300)
* disable default MKLDNN for cross compilation (13893)
* Update MKLDNN_README.md (13653)
* [Quantization] Support zero-size tensor input for quantization flow (15031)
* Support 3D input for MKL-DNN softmax operator (14818)
* Add primitive cache for MKL-DNN sum (elemwise_add operator) (14914)
* Fix reshape to add in-place back (14903)
* [int8] Add MobileNetV2_1.0 & ResNet18 Quantization (14823)
* [MKLDNN]Improve quantizeV2 and dequantize latency (14641)
* added mkldnn dependency for plugin compile target (14274)
* Support Quantized Fully Connected by INT8 GEMM (12922)

ONNX
* ONNX export: Instance normalization, Shape (12920)
* ONNX export: Logical operators (12852)
* ONNX import/export: Size (13112)
* ONNX export: Add Flatten before Gemm (13356)
* ONNX import/export: Add missing tests, ONNX export: LogSoftMax (13654)
* ONNX import: Hardmax (13717)
* [MXNET-898] ONNX import/export: Sample_multinomial, ONNX export: GlobalLpPool, LpPool (13500)
* ONNX ops: norm exported and lpnormalization imported (13806)
* [MXNET-880] ONNX export: Random uniform, Random normal, MaxRoiPool (13676)
* ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1 (12399)
* onnx export ops (13821)
* ONNX export: broadcast_to, tile ops (13981)
* ONNX export: Support equal length splits (14121)

TensorRT
* [MXNET-1252][1 of 2] Decouple NNVM to ONNX from NNVM to TensorRT conversion (13659)
* [MXNET-703] Update to TensorRT 5, ONNX IR 3. Fix inference bugs. (13310)
* [MXNET-703] Minor refactor of TensorRT code (13311)
* reformat trt to use subgraph API, add fp16 support (14040)

FP16 Support
* Update mshadow to support batch_dot with fp16. (13716)
* float32 → float16 cast consistency across implementations (13857)
* modifying SyncBN doc for FP16 use case (14041)
* support dot(vector, vector) for fp16 inputs on GPU (14102)
* softmax for fp16 with fp32 accumulator (14098)
* [MXNET-1327] Allow RNN Layers to be initialized to fp16 (14219)
* fp16 safe norm operator (14616)
* NAG Optimizer with multi-precision support (14568)

Deep Graph Library(DGL) support
* Add graph_compact operator. (13436)
* Accelerate DGL csr neighbor sampling (13588)

Horovod Integration
* Add extra header file to export for error checking (13795)
* whitelist symbols for using MXNet error handling externally (13812)
* Use CPUPinned context in ImageRecordIOParser2 (13980)
* Add pin_device_id option to Gluon DataLoader (14136)

Dynamic Shape
* [MXNET-1315] Add checks for dynamic-shaped operators in CachedOp (14018)
* [MXNET-1325] Make InferShapeAttr a standalone pass (14193)
* [MXNET-1324] Add NaiveRunGraph to imperative utils (14192)
* [MXNET-1352] Allow dynamic shape in while_loop and if conditionals (14393)

Backend Engine
* Add infer_type_partial (14214)
* Tidy up storage allocation and deallocation (14480)
* Add MXEnginePushAsync and MXEnginePushSync C APIs (14615)
* Enhance subgraph API (14113)
* Enhance PartitionGraph (14277)
* Allow clearing gpu cache (14252)
* Fix warning / static function in header. (14900)
* Simplify creation of NodeEntry instances and use emplace_back (14095)
* Add unpooled gpu memory type (14716)
* [MXNET-1398] Enable zero-copy from numpy to MXNet NDArray (14733)
* Use DEFAULT macro in C APIs (14767)
* Avoid unnecessary vector copies in imperative_utils.cc (14665)
* Support populating errors back to MXNet engine in callback (13922)
* Restore save/load ndarray to 1.4.1 (15073)
* Enable serializing/deserializing ndarrays in np_shape semantics (15090)
* [numpy] Support zero-dim and zero-size tensors in MXNet (14661)
* Rename np_compat to np_shape (15063)
* [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple (14270)

Large Tensor Support
* Large array support for randint (14242)
* [MXNET-1185] Support large array in several operators (part 1) (13418)
* [MXNET-1401] adding more operators to test support for Large Tensor (14944)
* [MXNET-1410]Adding Large Tensor Support for tensor transpose (15059)

Quantization
* Exclude concat layer for gpu quantization (14060)
* Enhance gpu quantization (14094)
* Register fake grad to subgraph and quantized operators (14275)
* Add int8 data loader (14123)

Profiler
* [MXNET-857] Add initial NVTX profiler implementation (12328)

CoreML
* Add more support for mxnet_to_coreml (14222)


Front End API

Gluon
* Add pixelshuffle layers (13571)
* [MXNET-766] add dynamic_unroll RNN for HybridBlock (11948)
* add pos_weight for SigmoidBinaryCrossEntropyLoss (13612)
* Rewrite dataloader with process pool, improves responsiveness and reliability (13447)
* Complimentary gluon DataLoader improvements (13606)
* [Fit-API] Address PR comments (14885)
* [Fit API] update estimator (14849)
* [MXNET-1396][Fit-API] Update default handler logic (14765)
* [Fit API] improve event handlers (14685)
* move to gluon contrib (14635)
* move estimator to contrib (14633)
* [MXNet-1340][Fit API]Update train stats (14494)
* [MXNet-1334][Fit API]base class for estimator and eventhandler (14346)
* [MXNET-1333] Estimator and Fit API (14629)
* Add support for fast variable-length LSTM (14208)
* Add the Gluon Implementation of Deformable Convolution (14810)
* hybridize rnn and add model graph (13244)

Python
* Python BucketingModule bind() with grad_req = 'add' (13984)
* Refine runtime feature discovery python API and add documentation to ... (14130)
* Runtime feature detection (13549)
* Add dtype visualization to plot_network (14066)
* [MXNET-1359] Adds a multiclass-MCC metric derived from Pearson (14461)
* support long for mx.random.seed (14314)
* Optimization of metric evaluation (13471)
* [MXNET-1403] Disable numpy's writability of NDArray once it is zero-copied to MXNet (14948)
* Refactor ImageRecordIter (14824)


Language Bindings

Scala
* [MXNET-1260] Float64 DType computation support in Scala/Java (13678)
* [MXNET-1000] get Ndarray real value and form it from a NDArray (12690)
* Now passing DType of Label downstream to Label's DataDesc object (14038)
* Scala interpreter instructions (14169)
* Add default parameters for Scala NDArray.arange (13816)
* [MXNET-1287] Up scala comp (14667)
* [MXNET-1385] Improved Scala Init and Macros warning messages (14656)
* Remove all usages of makefile for scala (14013)
* Update scala-package gitignore configuration. (13962)
* [MXNET-1177]Adding Scala Demo to be run as a part of Nightly CI (13823)
* [MXNET-1287] Miscellaneous Scala warning fixes (14658)
* Fix jar path and add missing ones for spark jobs (14020)
* [MXNET-1155] Add scala packageTest utility (13046)
* [MXNET-1195] Cleanup Scala README file (13582)
* Add scalaclean to make clean (14322)
* Add maven wraper to scala project. (13702)
* Add new Maven build for Scala package (13819)
* [MXNET-1287] Feat dep (14668)
* add Apache header on all XML (14138)
* update the version name (14076)
* change to compile time (13835)
* [MXNET-918] Random module (13039)
* Avoid secondary deployment of package to local (14647)

Java
* [MXNET-1180] Java Image API (13807)
* [MXNET-1285] Draw bounding box with Scala/Java Image API (14474)
* Add BERT QA Scala/Java example (14592)
* [MXNET-1232] fix demo and add Eclipse support (13979)
* [MXNET-1331] Removal of non-MXNET classes from JAR (14303)
* Java install info update (13912)
* [MXNET-1226] add Docs update for MXNet Java (14395)
* [MXNET-1383] Java new use of ParamObject (14645)
* MXNET-1302 Exclude commons-codec and commons-io from assembled JAR (14000)

C++
* print error message for mxnet::cpp::Operator::Invoke when failed (14318)
* build docs with CPP package (13983)
* Update inception_inference.cpp (14674)
* Optimize C++ API (13496)

Clojure
* [Clojure] - Add Spec Validations to the Optimizer namespace (13499)
* [Clojure] Add Spec Validations for the Random namespace (13523)
* [Clojure] Correct the versions in the README so they correspond to the latest maven.org release (13507)
* Port of scala infer package to clojure (13595)
* Clojure example for fixed label-width captcha recognition (13769)
* Update project.clj file to use the snapshots repo to be able to pull (13935)
* [Clojure] Add resource scope to clojure package (13993)
* [clojure-package] improve docstrings in image.clj (14307)
* [Clojure] Helper function for n-dim vector to ndarray (14305)
* [clojure]: add comp-metric based on CompositeEvalMetric (14553)
* [Clojure] enhance draw bounding box (14567)
* [Clojure] Add methods based on NDArrayAPI/SymbolAPI (14195)
* [Clojure] Clojure BERT QA example (14691)
* [clojure-package][wip] add ->nd-vec function in ndarray.clj (14308)
* Update version to v1.5.0 including clojure package (13566)
* [clojure][generator] ndarray/symbol api random merged (14800)
* upgrade codox to work with lein 2.9.0 (14133)
* [clojure] fix: image test does not rely on s3 to run (15122)

Julia
* Julia v0.7/1.0 support and drop v0.6 support (12845)
* Julia: split ndarray.jl into several snippets (14001)
* Julia: split symbolic-node.jl into several snippets (14024)
* Julia: rename mx.clip to clamp for NDArray (14027)
* Julia: add binding for runtime feature detection (13992)

Perl
* Two more gluon loss classes. (14194)

R
* add NAG optimizer to r api (14023)
* R-Package Makefile (14068)


Performance Improvements

* Less cudaGet/SetDevice calls in Gluon execution (13764)
* Improve bulking in Gluon (13890)
* Increase performance of BulkAppend and BulkFlush (14067)
* Performance improvement in ToTensor GPU Kernel (14099)
* Performance improvement in Normalize GPU Kernel (14139)
* Bulked op segments to allow Variable nodes (14200)
* Performance improving for MKL-DNN Quantized FullyConnected (14528)
* speedup SequenceMask on GPU (14445)
* Dual stream cudnn Convolution backward() with MXNET_GPU_WORKER_NSTREAMS=2. (14006)
* Speedup `_contrib_index_copy` (14359)
* use mkl sparse matrix to improve performance (14492)
* Re-enable static cached_op optimization (14931)
* Speed up SequenceReverse (14627)
* Improve FC perf when no_bias=False (15033)
* Improve cached_op performance for static mode (14785)


Example and Tutorials

* [MXNET-949] Module API to Gluon API tutorial (12542)
* Support SSD f32/int8 evaluation on COCO dataset (14646)
* [MXNET-1209] Tutorial transpose reshape (13208)
* [Clojure] Add Fine Tuning Sentence Pair Classification BERT Example (14769)
* example/ssd/evaluate/eval_metric.py (14561)
* Add examples of running MXNet with Horovod (14286)
* Added link to landing page for Java examples (14481)
* Update lip reading example (13647)
* [MXNET-1121] Example to demonstrate the inference workflow using RNN (13680)
* [MXNET-1301] Remove the unnecessary WaitAll statements from inception_inference example (13972)
* Modifying clojure CNN text classification example (13865)
* [MXNET-1210 ] Gluon Audio - Example (13325)
* add examples and fix the dependency problem (13620)
* add quantization example to readme (14186)
* Add an inference script providing both accuracy and benchmark result for original wide_n_deep example (13895)
* Update autoencoder example (12933)
* 13813 examples with opencv4/origami (13813)
* [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API (13294)
* Add tutorial on how to use build from source jar (14197)
* Gluon end to end tutorial (13411)
* Update MXNetTutorialTemplate.ipynb (13568)
* Simplifications and some fun stuff for the MNIST Gluon tutorial (13094)
* Clarify dependency on OpenCV in CNN Visualization tutorial. (13495)
* Update row_sparse tutorial (13414)
* add clojure tutorials to index (14814)
* Update lstm_crf.py (14865)


Website

* Version switching user experience improvements (13921)
* fix toctree Sphinx errors (13489)
* fix link (15036)
* fix website build (14148)
* Fixed mailing list addresses (13766)
* website publish updates (14015)
* use relative links; update links (13741)
* update social media section (13705)
* [MXNET] Updated http://data.dmlc.ml/ links to http://data.mxnet.io/ (15065)

Documentation
* [MXNET-1402] MXNet docs change for 1.4.1 release (14949)
* Add API documentation for upsampling operator with examples (14919)
* Make docblocks for Gluon BatchNorm and SyncBatchNorm consistent with the code (14840)
* [DOC] Update ubuntu install instructions from source (14534)
* [Clojure] Better api docstrings by replacing newlines (14752)
* Fix documentation for bilinear upsampling and add unit test (14035)
* Updated docs for R-package installation (14269)
* [docstring] improve docstring and indentation in `module.clj` (14705)
* Remove the stale reference to the previously deleted python-howto folder to keep documents consistent (14573)
* Updated documentation about nightly tests (14493)
* [Doc] Start the tutorials for MKL-DNN backend (14202)
* [DOC] fix sym.arange doc (14237)
* fix render issue in NDArray linalg docs (14258)
* [clojure-package] fix docstrings in `normal.clj` (14295)
* [DOC] Refine documentation of runtime feature detection (14238)
* [MXNET-1178] updating scala docs (14070)
* Fix website scala doc (14065)
* Return value docs for nd.random.* and sym.random.* (13994)
* Fixing the doc for symbolic version of rand_zipfian (13978)
* fix doc of take operator (13947)
* beta doc fixes (13860)
* [MXNET-1255] update hybridize documentation (13597)
* Update Adam optimizer documentation (13754)
* local docs build feature (13682)
* gluon docfix (13631)
* Added javadocs and improved example instructions (13711)
* [MXNET-1164] Generate the document for cpp-package using Doxygen (12977)
* Fix warning in waitall doc (13618)
* Updated docs for randint operator (13541)
* Update java setup docs for 1.4.0 (13536)
* clarify ops faq regarding docs strings (13492)
* [MXNET-1158] JVM Memory Management Documentation (13105)
* Fixing a 404 in the ubuntu setup doc (13542)
* Fix READMEs for examples (14179)
* [Doc] Add MKL-DNN operator list (14891)
* Fixed some typos in AvgPooling Docs (14324)
* doc fix (13465)
* Change Straight Dope to Dive into Deep Learning (14465)
* [DEV] update code owner (14862)
* Add notes about debug with libstdc++ symbols (13533)
* Mention additional language bindings and add links (14798)
* add contributors from intel (14455)
* what's new - add 1.4.0 release (14435)
* added note about cuda9.2 requirement (14140)
* Remove unnecessary "also" in README.md (14543)
* Updated news.md with the latest mkldnn submodule version (14298)
* add new cloud providers to install page (14039)
* Update NOTICE (14043)
* Update README.md (13973)
* Update profiler doc (13901)
* Add CODEOWNERS for Julia package (13872)
* update code owner (13737)
* Update git clone location to apache github (13706)
* NEWS.md backport from v1.4.x to master (13693)
* Update CODEOWNERS, add Pedro Larroy. (13579)
* [MXNET-1225] Always use config.mk in make install instructions (13364)
* Docs & website sphinx errors squished 🌦 (13488)
* add Qing's Key to master (14180)
* add KEY for zachgk (14965)
* corrected a spellign (14247)


Severity Details

CVSS Base Score

MEDIUM 6.6

CVSS v3 Details

MEDIUM 6.6 (AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H)

Attack Vector (AV): NETWORK
Attack Complexity (AC): HIGH
Privileges Required (PR): HIGH
User Interaction (UI): NONE
Scope (S): UNCHANGED
Confidentiality Impact (C): HIGH
Integrity Impact (I): HIGH
Availability Impact (A): HIGH

CVSS v2 Details

HIGH 8.5 (AV:N/AC:M/Au:S/C:C/I:C/A:C)

Access Vector (AV): NETWORK
Access Complexity (AC): MEDIUM
Authentication (Au): SINGLE
Confidentiality Impact (C): COMPLETE
Integrity Impact (I): COMPLETE
Availability Impact (A): COMPLETE