Carefree-learn

Latest version: v0.5.0


0.1.10

Distributed Training

`carefree-learn` now provides out-of-the-box API for distributed training with `deepspeed`:

```python
import cflearn
import numpy as np

x = np.random.random([1000000, 10])
y = np.random.random([1000000, 1])
m = cflearn.deepspeed(x, y, cuda="0,1,2,3").m
```


⚠️⚠️⚠️ However, it is not recommended to use this API unless you have to (e.g., when training some very large models). ⚠️⚠️⚠️


Misc

+ Supported `use_final_bn` in `FCNNHead` (#75).
+ Ensured that models are always in eval mode during inference (#67).
+ Supported specifying `resource_config` of `Parallel` in `Experiment` (#68).
+ Implemented `profile_forward` for `Pipeline`.

---

+ Fixed other bugs.
+ Accelerated `DNDF` with `Function` (#69).
+ Patience of `TrainMonitor` now depends on the dataset size (#43).
+ Checkpoints will now be logged earlier when using `warmup` (#74).

0.1.9

Release Notes

`carefree-learn 0.1.9` improved overall performance and accessibility.


`ModelConfig`

`carefree-learn` now introduces `ModelConfig` to manage configurations more easily.

Modify `extractor_config`, `head_config`, etc.

<table align=center>
<tr>
<td align=center><b>v0.1.8</b></td>
<td align=center><b>v0.1.9</b></td>
</tr>
<tr>
<td>
<pre lang="python">
head_config = {...}
cflearn.make(
model_config={
"pipe_configs": {
"fcnn": {"head": head_config},
},
},
)
</pre>
</td>
<td>
<pre lang="python">
head_config = {...}
cflearn.ModelConfig("fcnn").switch().head_config = head_config
</pre>
</td>
</tr>
</table>

Switch to a preset config

<table align=center>
<tr>
<td align=center><b>v0.1.8</b></td>
<td align=center><b>v0.1.9</b></td>
</tr>
<tr>
<td>
<pre lang="python">
Not accessible, must register a new model
with the corresponding config:
cflearn.register_model(
"pruned_fcnn",
pipes=[
cflearn.PipeInfo(
"fcnn",
head_config="pruned",
)
],
)
cflearn.make("pruned_fcnn")
</pre>
</td>
<td>
<pre lang="python">
cflearn.ModelConfig("fcnn").switch().replace(head_config="pruned")
cflearn.make("fcnn")
</pre>
</td>
</tr>
</table>
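
Putting the two one-liners together, a minimal end-to-end sketch (reusing the random data pattern from the other examples in these notes) looks like this:

```python
import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])

# switch the `fcnn` head to the preset "pruned" config ...
cflearn.ModelConfig("fcnn").switch().replace(head_config="pruned")
# ... and then train as usual
m = cflearn.make("fcnn").fit(x, y)
```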


Misc

+ Enhanced `LossBase` (#66).
+ Introduced callbacks to `Trainer` (#65).
+ Enhanced `Auto` and supported specifying `extra_config` with a json file path (752f419).

---

+ Fixed other bugs.
+ Optimized `Transformer` (adce2f9).
+ Optimized performance of `TrainMonitor` (91dfc43).
+ Optimized performance of `Auto` (47caa48, 9dfa204, 274b28d and #61, #63, #64).

0.1.8

Release Notes

`carefree-learn 0.1.8` mainly registered all PyTorch schedulers and enhanced `mlflow` integration.


Backward Compatibility Breaking

`carefree-learn` now keeps a copy of the original user-defined configs (#48), which changes the saved config file:

<table align=center>
<tr>
<td align=center><b>v0.1.7 (config.json)</b></td>
<td align=center><b>v0.1.8 (config_bundle.json)</b></td>
</tr>
<tr>
<td>
<pre lang="hjson">
{
"data_config": {
"label_name": "Survived"
},
"cuda": 0,
"model": "tree_dnn"
// the `binary_config` was injected into `config.json`
"binary_config": {
"binary_metric": "acc",
"binary_threshold": 0.49170631170272827
}
}
</pre>
</td>
<td>
<pre lang="json">
{
"config": {
"data_config": {
"label_name": "Survived"
},
"cuda": 0,
"model": "tree_dnn"
},
"increment_config": {},
"binary_config": {
"binary_metric": "acc",
"binary_threshold": 0.49170631170272827
}
}
</pre>
</td>
</tr>
</table>


New Schedulers

`carefree-learn` now supports the following schedulers, built on top of the corresponding PyTorch schedulers:
+ [`step`](https://github.com/carefree0910/carefree-learn/blob/ccfbcb2d13d84a502c4e1d27249a10e4a7c36f2b/cflearn/modules/schedulers.py#L31): the `StepLR` with `lr_floor` supported.
+ [`exponential`](https://github.com/carefree0910/carefree-learn/blob/ccfbcb2d13d84a502c4e1d27249a10e4a7c36f2b/cflearn/modules/schedulers.py#L53): the `ExponentialLR` with `lr_floor` supported.
+ [`cyclic`](https://github.com/carefree0910/carefree-learn/blob/ccfbcb2d13d84a502c4e1d27249a10e4a7c36f2b/cflearn/modules/schedulers.py#L25): the `CyclicLR`.
+ [`cosine`](https://github.com/carefree0910/carefree-learn/blob/ccfbcb2d13d84a502c4e1d27249a10e4a7c36f2b/cflearn/modules/schedulers.py#L26): the `CosineAnnealingLR`.
+ [`cosine_restarts`](https://github.com/carefree0910/carefree-learn/blob/ccfbcb2d13d84a502c4e1d27249a10e4a7c36f2b/cflearn/modules/schedulers.py#L27): the `CosineAnnealingWarmRestarts`.

These schedulers can be used easily by specifying `scheduler=...` in any high-level API of `carefree-learn`, e.g.:

```python
m = cflearn.make(scheduler="cyclic").fit(x, y)
```
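
Since `step` and `exponential` wrap the corresponding PyTorch schedulers, their usual hyper-parameters should be tunable as well. Below is a hedged sketch; the `scheduler_config` keyword is an assumption (it is not confirmed by these notes), so please check the documentation for the exact parameter name:

```python
import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
# `step_size` / `gamma` are the standard `StepLR` arguments;
# `scheduler_config` is assumed here to be the place to pass them
m = cflearn.make(
    scheduler="step",
    scheduler_config={"step_size": 10, "gamma": 0.9},
).fit(x, y)
```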



Better mlflow Integration

In order to utilize [`mlflow`](https://mlflow.org/) better, `carefree-learn` now handles some best practices for you under the hood, e.g.:
+ Makes the initialization of mlflow multi-thread safe in distributed training.
+ Automatically handles the `run_name` in distributed training.
+ Automatically handles the parameters for `log_params`.
+ Updates the artifacts periodically.

> The (brief) documentation for the mlflow integration can be found [here](https://carefree0910.me/carefree-learn-doc/docs/user-guides/mlflow).

0.1.7.1

Release Notes

`carefree-learn 0.1.7` integrated [`mlflow`](https://mlflow.org/) and cleaned up the [`Experiment`](https://carefree0910.me/carefree-learn-doc/docs/user-guides/distributed#experiment) API, which completes the machine learning lifecycle.

+ `v0.1.7.1`: Hotfixed a critical bug that caused the worst saved checkpoint to be loaded.


mlflow

[`mlflow`](https://mlflow.org/) can help us visualize, reproduce, and serve our models. In `carefree-learn`, we can quickly play with [`mlflow`](https://mlflow.org/) by setting `mlflow_config` to an empty `dict`:

```python
import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])
m = cflearn.make(mlflow_config={}).fit(x, y)
```


After that, we can execute `mlflow ui` in the current working directory to inspect the tracking results (e.g. loss curves, metric curves, etc.).

> We're planning to add documentation for the mlflow integration, and it should be available with `v0.1.8`.


Experiment

The [`Experiment`](https://carefree0910.me/carefree-learn-doc/docs/user-guides/distributed#experiment) API was embarrassingly user-unfriendly before, but it has been cleaned up and is ready to use since `v0.1.7`. Please refer to the [documentation](https://carefree0910.me/carefree-learn-doc/docs/user-guides/distributed#experiment) for more details; a rough sketch of the workflow is shown below.
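
The following is only a sketch of the documented pattern; treat method names such as `dump_data_bundle`, `add_task` and `run_tasks` as assumptions and consult the linked documentation for the exact signatures:

```python
import cflearn
import numpy as np

x = np.random.random([1000, 10])
y = np.random.random([1000, 1])

experiment = cflearn.Experiment()
# dump the dataset once so that every task can share it
data_folder = experiment.dump_data_bundle(x, y)
for model in ["linear", "fcnn", "tree_dnn"]:
    experiment.add_task(model=model, data_folder=data_folder)
# run all tasks (in parallel, if resources permit) and collect the results
results = experiment.run_tasks()
```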


Misc

+ Integrated [`DeepSpeed`](https://www.deepspeed.ai/) for distributed training on a single model (experimental).
+ Enhanced `Protocol` for downstream usages (e.g. Quantitative Trading, Computer Vision, etc.) (experimental).

---

+ Fixed other bugs.
+ Optimized `TrainMonitor` (#39).
+ Optimized some default settings.

0.1.6

Release Notes

`carefree-learn 0.1.6` is mainly a hot-fix version for `0.1.5`.


Misc

+ Simplified `Pipeline.load` (0033bda).
+ Generalized `input_sample` (828e985).
+ Implemented `register_aggregator`.

0.1.5

Miscellaneous fixes and updates.
