mlx

Latest version: v0.13.1


0.0.7

Core

- Support for loading and saving Hugging Face's safetensors format
- Transposed quantization matmul kernels
- `mlx.core.linalg` sub-package with `mx.linalg.norm` (Frobenius, infinity, p-norms)
- `tensordot` and `repeat` (example after this list)
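
A minimal sketch of the new core ops, assuming the `mx.save_safetensors`/`mx.load` entry points from current MLX; the shapes and file name are illustrative:

```python
import mlx.core as mx

a = mx.arange(12, dtype=mx.float32).reshape(3, 4)

# Frobenius norm (the matrix default) and a vector 1-norm from the new
# mlx.core.linalg sub-package
fro = mx.linalg.norm(a)
l1 = mx.linalg.norm(a.reshape(-1), ord=1)

# tensordot over one axis is just a matrix product here
b = mx.ones((4, 2))
c = mx.tensordot(a, b, 1)  # shape (3, 2)

# Repeat each row twice along axis 0
r = mx.repeat(a, 2, axis=0)  # shape (6, 4)

# Round-trip through the safetensors format
mx.save_safetensors("weights.safetensors", {"a": a})
loaded = mx.load("weights.safetensors")  # dict of arrays
```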

NN
- Layers
- `Bilinear`, `Identity`, `InstanceNorm`
- `Dropout2D`, `Dropout3D`
- More customizable `Transformer` (pre/post norm, dropout)
- More activations: `SoftSign`, `Softmax`, `HardSwish`, `LogSoftmax`
- Configurable scale in `RoPE` positional encodings
- Losses: `hinge`, `huber`, `log_cosh` (example after this list)
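
A short sketch of the new layers and losses; the shapes are made up, and the class spellings used here (`nn.Dropout2d`, `nn.InstanceNorm`, `nn.losses.huber_loss`) follow the current `mlx.nn` API:

```python
import mlx.core as mx
import mlx.nn as nn

x = mx.random.normal((8, 16, 16, 4))  # NHWC batch, channels last

# New layers: channel-wise dropout followed by instance normalization
drop = nn.Dropout2d(p=0.25)
norm = nn.InstanceNorm(dims=4)
y = norm(drop(x))

# New losses; reduction defaults to "none", so reduce explicitly
pred = mx.random.normal((8,))
target = mx.random.normal((8,))
loss = nn.losses.huber_loss(pred, target, reduction="mean")
```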

Misc
- Faster GPU reductions for certain cases
- Change to memory allocation to allow swapping

0.0.6

Core

- `quantize`, `dequantize`, `quantized_matmul`
- `moveaxis`, `swapaxes`, `flatten`
- `stack`
- `floor`, `ceil`, `clip`
- `tril`, `triu`, `tri`
- `linspace` (example after this list)
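
A quick sketch of the quantization path plus a few of the new helpers; the `group_size`/`bits` values shown are the library defaults, and all shapes are arbitrary:

```python
import mlx.core as mx

# Quantize a weight matrix, then multiply without dequantizing
w = mx.random.normal((64, 64))
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)
x = mx.random.normal((1, 64))
y = mx.quantized_matmul(x, w_q, scales, biases, group_size=64, bits=4)

# The weights can also be recovered (approximately) for inspection
w_hat = mx.dequantize(w_q, scales, biases, group_size=64, bits=4)

# Shape and construction helpers
t = mx.stack([mx.floor(x), mx.ceil(x)], axis=0)
m = mx.tril(mx.ones((4, 4)))
grid = mx.linspace(0, 1, num=5)
```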

Optimizers
- RMSProp, Adamax, Adadelta, Lion (example below)
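
The new optimizers drop into the usual MLX training step; a minimal sketch using `Lion`, with a placeholder model, data, and learning rate:

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

model = nn.Linear(4, 1)

def loss_fn(model, x, y):
    return nn.losses.mse_loss(model(x), y, reduction="mean")

x = mx.random.normal((32, 4))
y = mx.random.normal((32, 1))

optimizer = optim.Lion(learning_rate=1e-4)
loss, grads = nn.value_and_grad(model, loss_fn)(model, x, y)
optimizer.update(model, grads)  # apply one optimization step
mx.eval(model.parameters(), optimizer.state)
```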

NN

- Layers: `QuantizedLinear`, `ALiBi` positional encodings
- Losses: Label smoothing, Smooth L1 loss, Triplet loss (example after this list)
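
A hedged sketch of `QuantizedLinear` and the new label-smoothing option on cross entropy; the dimensions and smoothing value are illustrative:

```python
import mlx.core as mx
import mlx.nn as nn

# A 4-bit quantized linear layer; it stores packed weights plus
# per-group scales and biases
layer = nn.QuantizedLinear(256, 256, bits=4)
y = layer(mx.random.normal((1, 256)))

# Cross entropy with label smoothing
logits = mx.random.normal((8, 10))
targets = mx.array([0, 1, 2, 3, 4, 5, 6, 7])
loss = nn.losses.cross_entropy(
    logits, targets, label_smoothing=0.1, reduction="mean"
)
```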

Misc
- Bug fixes

0.0.5

- Core ops: `remainder`, `eye`, `identity` (example after this list)
- Additional functionality in `mlx.nn`
  - Losses: binary cross entropy, KL divergence, MSE, L1
  - Activations: PReLU, Mish, and several others
- More optimizers: AdamW, Nesterov momentum, Adagrad
- Bug fixes
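
A small sketch touching each group of 0.0.5 additions, assuming current argument spellings (e.g. the `reduction` keyword); all values are arbitrary:

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

# New core ops
e = mx.eye(3)                          # 3x3 identity matrix
r = mx.remainder(mx.array([7, 8]), 3)  # elementwise remainder

# New losses and activations
pred = mx.random.normal((4,))
target = mx.random.normal((4,))
mse = nn.losses.mse_loss(pred, target, reduction="mean")
act = nn.PReLU()
y = act(mx.array([-1.0, 2.0]))

# One of the new optimizers
opt = optim.AdamW(learning_rate=1e-3)
```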
