AutoGluon

Latest version: v1.1.0

1.1.0

We're happy to announce the AutoGluon 1.1 release.

AutoGluon 1.1 contains major improvements to the TimeSeries module, achieving a 60% win-rate vs AutoGluon 1.0 through the addition of Chronos, a pretrained model for time series forecasting, along with numerous other enhancements. The other modules have also been enhanced through new features such as Conv-LoRA support and improved performance for large tabular datasets between 5 and 30 GB in size. For a full breakdown of AutoGluon 1.1 features, please refer to the feature spotlights and the itemized enhancements below.

Join the community: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [![Twitter](https://img.shields.io/twitter/follow/autogluon?style=social)](https://twitter.com/autogluon)

This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.1.

This release contains **[125 commits from 20 contributors](https://github.com/autogluon/autogluon/compare/v1.0.0...v1.1.0)**!

Full Contributor List (ordered by number of commits):

shchur prateekdesai04 Innixma canerturkmen zhiqiangdon tonyhoo AnirudhDagar Harry-zzh suzhoum FANGAreNotGnu nimasteryang lostella dassaswat afmkt npepin-hub mglowacki100 ddelange LennartPurucker taoyang1122 gradientsky

Special thanks to ddelange for their continued assistance with Python 3.11 support and Ray version upgrades!

Spotlight

AutoGluon Achieves Top Placements in ML Competitions!

AutoGluon has experienced [widespread adoption on Kaggle](https://www.kaggle.com/search?q=autogluon+sortBy%3Adate) since the AutoGluon 1.0 release.
AutoGluon has been used in over 130 Kaggle notebooks and mentioned in over 100 discussion threads in the past 90 days!
Most excitingly, AutoGluon has already been used to achieve top ranking placements in multiple competitions with thousands of competitors since the start of 2024:

| Placement | Competition | Author | Date | AutoGluon Details | Notes |
|:-----------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------|:-----------|:------------------|:-------------------------------|
| :3rd_place_medal: Rank 3/2303 (Top 0.1%) | [Steel Plate Defect Prediction](https://www.kaggle.com/competitions/playground-series-s4e3/discussion/488127) | [Samvel Kocharyan](https://github.com/samvelkoch) | 2024/03/31 | v1.0, Tabular | Kaggle Playground Series S4E3 |
| :2nd_place_medal: Rank 2/93 (Top 2%) | [Prediction Interval Competition I: Birth Weight](https://www.kaggle.com/competitions/prediction-interval-competition-i-birth-weight/leaderboard) | [Oleksandr Shchur](https://shchur.github.io/) | 2024/03/21 | v1.0, Tabular | |
| :2nd_place_medal: Rank 2/1542 (Top 0.1%) | [WiDS Datathon 2024 Challenge 1](https://www.kaggle.com/competitions/widsdatathon2024-challenge1/discussion/482285) | [lazy_panda](https://www.kaggle.com/byteliberator) | 2024/03/01 | v1.0, Tabular | |
| :2nd_place_medal: Rank 2/3746 (Top 0.1%) | [Multi-Class Prediction of Obesity Risk](https://www.kaggle.com/competitions/playground-series-s4e2/discussion/480939) | [Kirderf](https://twitter.com/kirderf9) | 2024/02/29 | v1.0, Tabular | Kaggle Playground Series S4E2 |
| :2nd_place_medal: Rank 2/3777 (Top 0.1%) | [Binary Classification with a Bank Churn Dataset](https://www.kaggle.com/competitions/playground-series-s4e1/discussion/472496) | [lukaszl](https://www.kaggle.com/lukaszl) | 2024/01/31 | v1.0, Tabular | Kaggle Playground Series S4E1 |
| Rank 4/1718 (Top 0.2%) | [Multi-Class Prediction of Cirrhosis Outcomes](https://www.kaggle.com/competitions/playground-series-s3e26/discussion/464863) | [Kirderf](https://twitter.com/kirderf9) | 2024/01/01 | v1.0, Tabular | Kaggle Playground Series S3E26 |

We are thrilled that the data science community is leveraging AutoGluon as their go-to method to quickly and effectively achieve top-ranking ML solutions! For an up-to-date list of competition solutions using AutoGluon refer to our [AWESOME.md](https://github.com/autogluon/autogluon/blob/master/AWESOME.md#competition-solutions-using-autogluon), and don't hesitate to let us know if you use AutoGluon in a competition!

Chronos, a pretrained model for time series forecasting

AutoGluon-TimeSeries now features [Chronos](https://arxiv.org/abs/2403.07815), a family of forecasting models pretrained on large collections of open-source time series datasets that can generate accurate zero-shot predictions for new unseen data. Check out the [new tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html) to learn how to use Chronos through the familiar `TimeSeriesPredictor` API.
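Below is a minimal sketch of zero-shot forecasting with Chronos through the `TimeSeriesPredictor` API. The toy dataset and the `model_path` value (`amazon/chronos-t5-small`) are illustrative assumptions; see the tutorial for the supported model sizes and presets.

```python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Toy dataset with a single daily series; real data would typically contain many series.
df = pd.DataFrame({
    "item_id": "sensor_1",
    "timestamp": pd.date_range("2024-01-01", periods=100, freq="D"),
    "target": [float(i % 7) for i in range(100)],
})
train_data = TimeSeriesDataFrame.from_data_frame(df, id_column="item_id", timestamp_column="timestamp")

# Zero-shot forecasting with a pretrained Chronos checkpoint (the model path is an assumption;
# the tutorial lists the available sizes).
predictor = TimeSeriesPredictor(prediction_length=14).fit(
    train_data,
    hyperparameters={"Chronos": {"model_path": "amazon/chronos-t5-small"}},
)
forecast = predictor.predict(train_data)  # mean + quantile forecasts for the next 14 days
```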


General

- Refactor project README & project Tagline Innixma (3861, 4066)
- Add AWESOME.md competition results and other doc improvements. Innixma (4023)
- Pandas version upgrade. shchur Innixma (4079, 4089)
- PyTorch, CUDA, Lightning version upgrades. prateekdesai04 canerturkmen zhiqiangdon (3982, 3984, 3991, 4006)
- Ray version upgrade. ddelange tonyhoo (3774, 3956)
- Scikit-learn version upgrade. prateekdesai04 (3872, 3881, 3947)
- Various dependency upgrades. Innixma tonyhoo (4024, 4083)

TimeSeries

Highlights
AutoGluon 1.1 comes with numerous new features and improvements to the time series module. These include highly requested functionality such as feature importance, support for categorical covariates, the ability to visualize forecasts, and enhancements to logging. The new release also comes with considerable improvements to forecast accuracy, achieving a 60% win rate and a 3% average error reduction compared to the previous AutoGluon version. These improvements are mostly attributed to the addition of Chronos, improved preprocessing logic, and native handling of missing values.


New Features
- Add Chronos pretrained forecasting model ([tutorial](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-chronos.html)). canerturkmen shchur lostella (#3978, 4013, 4052, 4055, 4056, 4061, 4092, 4098)
- Measure the importance of features & covariates on the forecast accuracy with `TimeSeriesPredictor.feature_importance()` (see the sketch after this list). canerturkmen (4033, 4087)
- Native missing values support (no imputation required). shchur (3995, 4068, 4091)
- Add support for categorical covariates. shchur (3874, 4037)
- Improve inference speed by persisting models in memory with `TimeSeriesPredictor.persist()`. canerturkmen (4005)
- Visualize forecasts with `TimeSeriesPredictor.plot()`. shchur (3889)
- Add `RMSLE` evaluation metric. canerturkmen (3938)
- Enable logging to file. canerturkmen (3877)
- Add option to keep lightning logs after training with `keep_lightning_logs` hyperparameter. shchur (3937)
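A short sketch of several of the new features above, assuming `train_data` is a `TimeSeriesDataFrame` with known covariates (a hypothetical setup; argument details are best checked against the API docs):

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=14, eval_metric="RMSLE")
predictor.fit(train_data)

predictor.persist()                          # keep fitted models in memory for faster repeated inference
importance = predictor.feature_importance()  # per-feature/covariate impact on forecast accuracy
predictions = predictor.predict(train_data)
predictor.plot(train_data, predictions)      # visualize the forecasts
```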

Fixes and Improvements
- Automatically preprocess real-valued covariates shchur (4042, 4069)
- Add option to skip model selection when only one model is trained. shchur (4002)
- Ensure all metrics handle missing values in target shchur (3966)
- Fix bug when loading a GPU trained model on a CPU machine shchur (3979)
- Fix inconsistent random seed. canerturkmen shchur (3934, 4099)
- Fix crash when calling .info after load. afmkt (3900)
- Fix leaderboard crash when no models trained. shchur (3849)
- Add prototype TabRepo simulation artifact generation. shchur (3829)
- Fix refit_full bug. shchur (3820)
- Documentation improvements, hide deprecated methods. shchur (3764, 4054, 4098)
- Minor fixes. canerturkmen, shchur, AnirudhDagar (4009, 4040, 4041, 4051, 4070, 4094)

AutoMM

Highlights

AutoMM 1.1 introduces the innovative Conv-LoRA, a parameter-efficient fine-tuning (PEFT) method stemming from our latest paper presented at ICLR 2024, titled "[Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model](https://arxiv.org/abs/2401.17868)". Conv-LoRA is designed for fine-tuning the Segment Anything Model, exhibiting superior performance compared to previous PEFT approaches, such as LoRA and visual prompt tuning, across various semantic segmentation tasks in diverse domains including natural images, agriculture, remote sensing, and healthcare. Check out [our Conv-LoRA example](https://github.com/autogluon/autogluon/tree/master/examples/automm/Conv-LoRA).
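A rough sketch of fine-tuning SAM for semantic segmentation with Conv-LoRA follows. The DataFrame layout (file-path columns) and the PEFT hyperparameter key/value are assumptions; the linked Conv-LoRA example shows the exact configuration.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Columns hold file paths: "image" -> input images, "label" -> ground-truth segmentation masks.
train_df = pd.DataFrame({
    "image": ["data/img_001.png", "data/img_002.png"],
    "label": ["data/mask_001.png", "data/mask_002.png"],
})

predictor = MultiModalPredictor(problem_type="semantic_segmentation", label="label")
predictor.fit(
    train_data=train_df,
    # Select Conv-LoRA as the PEFT method; the key/value here are assumptions based on the
    # Conv-LoRA example linked above.
    hyperparameters={"optimization.efficient_finetune": "conv_lora"},
    time_limit=3600,
)
masks = predictor.predict(pd.DataFrame({"image": ["data/img_003.png"]}))
```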

New Features

- Added Conv-LoRA, a new parameter efficient fine-tuning method. Harry-zzh zhiqiangdon (3933, 3999, 4007, 4022, 4025)
- Added support for new column type: 'image_base64_str'. Harry-zzh zhiqiangdon (3867)
- Added support for loading pre-trained weights in FT-Transformer. taoyang1122 zhiqiangdon (3859)

Fixes and Improvements

- Fixed bugs in semantic segmentation. Harry-zzh (3801, 3812)
- Fixed crashes when using F1 metric. suzhoum (3822)
- Fixed bugs in PEFT methods. Harry-zzh (3840)
- Accelerated object detection training by ~30% for the high_quality and best_quality presets. FANGAreNotGnu (3970)
- Deprecated Grounding-DINO. FANGAreNotGnu (3974)
- Fixed lightning upgrade issues zhiqiangdon (3991)
- Fixed using f1, f1_macro, f1_micro for binary classification in knowledge distillation. nimasteryang (3837)
- Removed PyMuPDF from the installation due to a licensing issue. Users need to install it themselves to use document classification. zhiqiangdon (4093)


Tabular

Highlights
AutoGluon-Tabular 1.1 primarily focuses on bug fixes and stability improvements. In particular, we have greatly improved the runtime performance for large datasets between 5 and 30 GB in size by subsampling data for decision threshold calibration and by fitting the weighted ensemble on at most 1 million rows, maintaining the same quality while being far faster to execute. We also reduced the default number of weighted ensemble iterations from 100 to 25, which speeds up all weighted ensemble fit times by 4x. We heavily refactored the `fit_pseudolabel` logic, and it should now achieve noticeably stronger results.

Fixes and Improvements
- Fix return value in `predictor.fit_weighted_ensemble(refit_full=True)`. Innixma (1956)
- Enhance performance on large datasets through subsampling. Innixma (3977)
- Fix refit_full crash when out of memory. Innixma (3977)
- Refactor and enhance `.fit_pseudolabel` logic. Innixma (3930)
- Fix crash in memory check during HPO for LightGBM, CatBoost, and XGBoost. Innixma (3931)
- Fix dynamic stacking on windows. Innixma (3893)
- LightGBM version upgrade. mglowacki100, Innixma (3427)
- Fix memory-safe sub-fits being skipped if Ray is not initialized. LennartPurucker (3868)
- Logging improvements. AnirudhDagar (3873)
- Hide deprecated methods. Innixma (3795)
- Documentation improvements. Innixma AnirudhDagar (2024, 3975, 3976, 3996)

Docs and CI
- Add auto benchmarking report generation. prateekdesai04 (4038, 4039)
- Fix tabular tests for Windows. tonyhoo (4036)
- Fix hanging tabular unit tests. prateekdesai04 (4031)
- Fix CI evaluation. suzhoum (4019)
- Add package version comparison between CI runs prateekdesai04 (3962, 3968, 3972)
- Update conf.py to reflect current year. dassaswat (3932)
- Avoid redundant unit test runs. prateekdesai04 (3942)
- Fix colab notebook links prateekdesai04 (3926)

New Contributors
* npepin-hub made their first contribution in https://github.com/autogluon/autogluon/pull/3898
* afmkt made their first contribution in https://github.com/autogluon/autogluon/pull/3900
* dassaswat made their first contribution in https://github.com/autogluon/autogluon/pull/3932
* nimasteryang made their first contribution in https://github.com/autogluon/autogluon/pull/3837
* zkalson made their first contribution in https://github.com/autogluon/autogluon/pull/4096

1.0

New Features
* Added `dynamic_stacking` predictor fit argument to mitigate [stacked overfitting](https://github.com/autogluon/autogluon/issues/2779#issuecomment-1736468165) LennartPurucker Innixma (3616)
* Added [zeroshot-HPO learned portfolio](https://github.com/autogluon/autogluon/blob/master/tabular/src/autogluon/tabular/configs/zeroshot/zeroshot_portfolio_2023.py) as new hyperparameters for `best_quality` and `high_quality` presets. Innixma geoalgo (#3750)
* Added experimental scikit-learn API compatible wrappers to TabularPredictor. You can access them via `from autogluon.tabular.experimental import TabularClassifier, TabularRegressor` (see the sketch after this list). Innixma (3769)
* Added `predictor.model_failures()` Innixma (3421)
* Added enhanced FT-Transformer taoyang1122 Innixma (3621, 3644, 3692)
* Added `predictor.simulation_artifact()` to support integration with [TabRepo](https://github.com/autogluon/tabrepo) Innixma (#3555)
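A minimal sketch of the experimental scikit-learn wrappers mentioned above, using a standard scikit-learn dataset purely for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from autogluon.tabular.experimental import TabularClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabularClassifier()   # experimental scikit-learn compatible wrapper around TabularPredictor
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```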

Performance Improvements
* Enhanced FastAI model quality on regression via output clipping LennartPurucker Innixma (3597)
* Added Skip-connection Weighted Ensemble LennartPurucker (3598)
* Fix memory leaks by using ray processes for sequential fitting LennartPurucker (3614)
* Added dynamic parallel folds support to better utilize compute in low memory scenarios yinweisu Innixma (3511)
* Fixed linear model crashes during HPO and added search space for linear models Innixma (3571, 3720)

Other Enhancements
* Multi-layer stacking now produces deterministic results LennartPurucker (3573)
* Various model dependency updates mglowacki100 (3373)
* Various code cleanup and logging improvements Innixma (3408, 3570, 3652, 3734)

Bug Fixes / Code and Doc Improvements
* Fixed incorrect model memory usage calculation Innixma (3591)
* Fixed `infer_limit` being used incorrectly when bagging Innixma (3467)
* Fixed rare edge-case FastAI model crash Innixma (3416)
* Various minor bug fixes Innixma (3418, 3480)

AutoMM
[AutoGluon Multimodal (AutoMM)](https://auto.gluon.ai/stable/tutorials/multimodal/index.html) is designed to simplify the fine-tuning of foundation models for downstream applications with just three lines of code. It seamlessly integrates with popular model zoos such as [HuggingFace Transformers](https://github.com/huggingface/transformers), [TIMM](https://github.com/huggingface/pytorch-image-models), and [MMDetection](https://github.com/open-mmlab/mmdetection), providing support for a diverse range of data modalities,
including image, text, tabular, and document data, whether used individually or in combination.

New Features

* Semantic Segmentation
* Introducing the new problem type `semantic_segmentation`, for fine-tuning [Segment Anything Model (SAM)](https://segment-anything.com/) with three lines of code. Harry-zzh zhiqiangdon (#3645, 3677, 3697, 3711, 3722, 3728)
* Added comprehensive benchmarks from diverse domains, including natural images, agriculture, remote sensing, and healthcare.
* Utilizing parameter-efficient finetuning (PEFT) [LoRA](https://arxiv.org/abs/2106.09685), showcasing consistent superior performance over alternatives ([VPT](https://arxiv.org/abs/2203.12119), [adaptor](https://arxiv.org/abs/1902.00751), [BitFit](https://arxiv.org/abs/2106.10199), [SAM-adaptor](https://arxiv.org/abs/2304.09148), and [LST](https://arxiv.org/abs/2206.06522)) in the extensive benchmarks.
* Added one [semantic segmentation tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/image_segmentation/beginner_semantic_seg.html) zhiqiangdon (#3716).
* Using [SAM-ViT Huge](https://huggingface.co/facebook/sam-vit-huge) by default (GPU memory > 25GB required).
* Few Shot Classification
* Added the new `few_shot_classification` problem type for training few-shot classifiers on images or texts (see the sketch after this list). zhiqiangdon (3662, 3681, 3695)
* Leveraging image/text foundation models to extract features and train SVM classifiers.
* Added one [few shot classification tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/few_shot_learning.html). zhiqiangdon (#3662)
* Supported [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) for faster training (experimental and torch >=2.2 required) zhiqiangdon (#3520).
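A small sketch of the new `few_shot_classification` problem type on a toy text dataset (the column names are illustrative; see the few shot classification tutorial linked above):

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# A handful of labeled examples per class, as is typical for few-shot classification.
train_df = pd.DataFrame({
    "text": ["great movie", "terrible plot", "loved every minute", "a waste of time"],
    "label": ["positive", "negative", "positive", "negative"],
})

# Foundation-model features + an SVM classifier are used under the hood.
predictor = MultiModalPredictor(problem_type="few_shot_classification", label="label")
predictor.fit(train_df)
preds = predictor.predict(pd.DataFrame({"text": ["what a fantastic film"]}))
```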

Performance Improvements
* Improved default image backbones, achieving a 100% win-rate on the image benchmark. taoyang1122 (3738)
* Replaced MLPs with FT-Transformer as the default tabular backbones, resulting in a 67% win-rate on the text+tabular benchmark. taoyang1122 (3732)
* Using both the improved default image backbones and FT-Transformer achieves a 62% win-rate on the text+tabular+image benchmark. taoyang1122 (3732, 3738)

Stability Enhancements
* Enabled rigorous multi-GPU CI testing. prateekdesai04 (3566)
* Fixed multi-GPU issues. FANGAreNotGnu (3617 3665 3684 3691, 3639, 3618)

Enhanced Usability
* Supported custom evaluation metrics, allowing users to define a custom [metric object](https://auto.gluon.ai/dev/tutorials/tabular/advanced/tabular-custom-metric.html) and pass it to the `eval_metric` argument (see the sketch after this list) taoyang1122 (#3548)
* Supported multi-GPU training in notebooks (experimental) zhiqiangdon (3484)
* Improved logging with system info zhiqiangdon (3735)
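A sketch of defining a custom metric and passing it through `eval_metric`, following the linked custom-metric tutorial (the training DataFrame `train_df` is assumed):

```python
import sklearn.metrics
from autogluon.core.metrics import make_scorer
from autogluon.multimodal import MultiModalPredictor

# Wrap an sklearn metric function as an AutoGluon Scorer object.
balanced_acc = make_scorer(
    name="balanced_accuracy",
    score_func=sklearn.metrics.balanced_accuracy_score,
    optimum=1,
    greater_is_better=True,
)

predictor = MultiModalPredictor(label="label", eval_metric=balanced_acc)
# predictor.fit(train_df)
```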

Improved Scalability
* The introduction of the new learner class design facilitates easier support for new tasks and data modalities within AutoMM, enhancing overall scalability. zhiqiangdon (3650, 3685, 3735)

Other Enhancements

* Added the option `hf_text.use_fast` for customizing fast tokenizer usage in `hf_text` models. zhiqiangdon (3379)
* Added fallback evaluation/validation metric, supporting `f1_macro` `f1_micro`, and `f1_weighted`. FANGAreNotGnu (3696)
* Supported multi-GPU inference with the DDP strategy. zhiqiangdon (3445, 3451)
* Upgraded torch to 2.0. zhiqiangdon (3404)
* Upgraded lightning to 2.0 zhiqiangdon (3419)
* Upgraded torchmetrics to 1.0 zhiqiangdon (3422)

Code Improvements

* Refactored AutoMM with the learner class for improved design. zhiqiangdon (3650, 3685, 3735)
* Refactored FT-Transformer. taoyang1122 (3621, 3700)
* Refactored the visualizers of object detection, semantic segmentation, and NER. zhiqiangdon (3716)
* Other code refactor/clean-up: zhiqiangdon FANGAreNotGnu (3383 3399 3434 3667 3684 3695)

Bug Fixes/Doc Improvements

* Fixed HPO for focal loss. suzhoum (3739)
* Fixed one ONNX export issue. AnirudhDagar (3725)
* Improved AutoMM introduction for clarity. zhiqiangdon (3388 3726)
* Improved AutoMM API doc. zhiqiangdon AnirudhDagar (3772 3777)
* Other bug fixes zhiqiangdon FANGAreNotGnu taoyang1122 tonyhoo rsj123 AnirudhDagar (3384, 3424, 3526, 3593, 3615, 3638, 3674, 3693, 3702, 3690, 3729, 3736, 3474, 3456, 3590, 3660)
* Other doc improvements zhiqiangdon FANGAreNotGnu taoyang1122 (3397, 3461, 3579, 3670, 3699, 3710, 3716, 3737, 3744, 3745, 3680)

TimeSeries

Highlights
AutoGluon 1.0 features numerous usability and performance improvements to the TimeSeries module. These include automatic handling of missing data and irregular time series, new forecasting metrics (including custom metric support), advanced time series cross-validation options, and new forecasting models. AutoGluon produces state-of-the-art results in forecast accuracy, achieving [70%+ win rate](https://openreview.net/forum?id=XHIY3cQ8Tew) compared to other popular forecasting frameworks.

New features
- Support for custom forecasting metrics shchur (3760, 3602)
- New forecasting metrics `WAPE`, `RMSSE`, `SQL` + improved [documentation for metrics](https://auto.gluon.ai/dev/tutorials/timeseries/forecasting-metrics.html) melopeo shchur (#3747, 3632, 3510, 3490)
- Improved robustness: `TimeSeriesPredictor` can now handle data with all [pandas frequencies](https://pandas.pydata.org/docs/user_guide/timeseries.html#offset-aliases), irregular timestamps, or missing values represented by `NaN` shchur (3563, 3454)
- New models: intermittent demand forecasting models based on conformal prediction (`ADIDA`, `CrostonClassic`, `CrostonOptimized`, `CrostonSBA`, `IMAPA`); `WaveNet` and `NPTS` from GluonTS; new baseline models (`Average`, `SeasonalAverage`, `Zero`) canerturkmen shchur (3706, 3742, 3606, 3459)
- Advanced cross-validation options: avoid retraining the models for each validation window with `refit_every_n_windows` or adjust the step size between validation windows with `val_step_size`, both arguments to `TimeSeriesPredictor.fit` (see the sketch after this list) shchur (3704, 3537)
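A sketch combining the robustness and cross-validation features above; `train_data` is a hypothetical `TimeSeriesDataFrame` that may contain `NaN` values and irregular timestamps:

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=24, eval_metric="WAPE", freq="H")
predictor.fit(
    train_data,
    num_val_windows=4,           # evaluate on multiple backtest windows
    val_step_size=24,            # step size between validation windows
    refit_every_n_windows=2,     # avoid retraining the models for every window
)
```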

Enhancements
- Enable Ray Tune for deep-learning forecasting models canerturkmen (3705)
- Support passing multiple evaluation metrics to `TimeSeriesPredictor.evaluate` shchur (3646)
- Static features can now be passed directly to `TimeSeriesDataFrame.from_path` and `TimeSeriesDataFrame.from_data_frame` constructors shchur (3635)

Performance improvements
- Much more accurate forecasts at low time limits thanks to new presets and updated logic for splitting the training time across models shchur (3749, 3657, 3741)
- Faster training and prediction + lower memory usage for `DirectTabular` and `RecursiveTabular` models (3740, 3620, 3559)
- Enable early stopping and improve inference speed for GluonTS models shchur (3575)
- Reduce import time for `autogluon.timeseries` by moving import statements inside model classes (3514)

Bug Fixes / Code and Doc Improvements
- Improve log messages shchur (3721)
- Add reference to the publication on AutoGluon-TimeSeries to README shchur (3482)
- Align API of `TimeSeriesPredictor` with `TabularPredictor`, remove deprecated methods shchur (3714, 3655, 3396)
- General bug fixes and improvements shchur (3758, 3756, 3755, 3754, 3746, 3743, 3727, 3698, 3654, 3653, 3648, 3628, 3588, 3560, 3558, 3536, 3533, 3523, 3522, 3476, 3463)

EDA

The EDA module will be released at a later time, as it requires additional development effort before it is ready for 1.0.
We will make an announcement when EDA is ready for release. For now, please continue to use `"autogluon.eda==0.8.2"`.

Deprecations

General
* `autogluon.core.spaces` has been deprecated. Please use `autogluon.common.spaces` instead Innixma (3701)

Tabular
Tabular logs warnings when deprecated methods are used. Deprecated methods are planned to be removed in AutoGluon 1.2 (see the migration sketch after the list below). Innixma (3701)
* `autogluon.tabular.TabularPredictor`
* `predictor.get_model_names()` -> `predictor.model_names()`
* `predictor.get_model_names_persisted()` -> `predictor.model_names(persisted=True)`
* `predictor.compile_models()` -> `predictor.compile()`
* `predictor.persist_models()` -> `predictor.persist()`
* `predictor.unpersist_models()` -> `predictor.unpersist()`
* `predictor.get_model_best()` -> `predictor.model_best`
* `predictor.get_pred_from_proba()` -> `predictor.predict_from_proba()`
* `predictor.get_oof_pred_proba()` -> `predictor.predict_proba_oof()`
* `predictor.get_oof_pred()` -> `predictor.predict_oof()`
* `predictor.get_model_full_dict()` -> `predictor.model_refit_map()`
* `predictor.get_size_disk()` -> `predictor.disk_usage()`
* `predictor.get_size_disk_per_file()` -> `predictor.disk_usage_per_file()`
* `predictor.leaderboard()` `silent` argument deprecated, replaced by `display`, defaults to False
* Same for `predictor.evaluate()` and `predictor.evaluate_predictions()`
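A short migration sketch for some of the renamed `TabularPredictor` methods (the load path is hypothetical):

```python
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor.load("AutogluonModels/my_predictor")  # hypothetical path

names = predictor.model_names()            # was: predictor.get_model_names()
predictor.persist()                        # was: predictor.persist_models()
best = predictor.model_best                # was: predictor.get_model_best()
refit_map = predictor.model_refit_map()    # was: predictor.get_model_full_dict()
lb = predictor.leaderboard(display=True)   # was: predictor.leaderboard(silent=False)
```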

AutoMM

* Deprecated the `FewShotSVMPredictor` in favor of the new `few_shot_classification` problem type zhiqiangdon (3699)
* Deprecated the `AutoMMPredictor` in favor of `MultiModalPredictor` zhiqiangdon (3650)
* `autogluon.multimodal.MultiModalPredictor`
* Deprecated the `config` argument in the fit API. zhiqiangdon (3679)
* Deprecated the `init_scratch` and `pipeline` arguments in the init API zhiqiangdon (3668)

TimeSeries
* `autogluon.timeseries.TimeSeriesPredictor`
* Deprecated argument `TimeSeriesPredictor(ignore_time_index: bool)`. Now, if the data contains irregular timestamps, either convert it to a regular frequency with `data = data.convert_frequency(freq)` or provide the frequency when creating the predictor as `TimeSeriesPredictor(freq=freq)` (see the migration sketch after this list).
* `predictor.evaluate()` now returns a dictionary (previously returned a float)
* `predictor.score()` -> `predictor.evaluate()`
* `predictor.get_model_names()` -> `predictor.model_names()`
* `predictor.get_model_best()` -> `predictor.model_best`
* Metric `"mean_wQuantileLoss"` has been renamed to `"WQL"`
* `predictor.leaderboard()` `silent` argument deprecated, replaced by `display`, defaults to False
* When setting `hyperparameters` to a string in `predictor.fit()`, supported values are now `"default"`, `"light"` and `"very_light"`
* `autogluon.timeseries.TimeSeriesDataFrame`
- `df.to_regular_index()` -> `df.convert_frequency()`
- Deprecated method `df.get_reindexed_view()`. Please see deprecation notes for `ignore_time_index` under `TimeSeriesPredictor` above for information on how to deal with irregular timestamps
- Models
- All models based on MXNet (`DeepARMXNet`, `MQCNNMXNet`, `MQRNNMXNet`, `SimpleFeedForwardMXNet`, `TemporalFusionTransformerMXNet`, `TransformerMXNet`) have been removed
- Statistical models from statsmodels (`ARIMA`, `Theta`, `ETS`) have been replaced by their counterparts from StatsForecast (3513). Note that these models now have different hyperparameter names.
- `DirectTabular` is now implemented using `mlforecast` backend (same as `RecursiveTabular`), most hyperparameter names for the model have changed.
- `autogluon.timeseries.TimeSeriesEvaluator` has been deprecated. Please use metrics available in `autogluon.timeseries.metrics` instead.
- `autogluon.timeseries.splitter.MultiWindowSplitter` and `autogluon.timeseries.splitter.LastWindowSplitter` have been deprecated. Please use `num_val_windows` and `val_step_size` arguments to `TimeSeriesPredictor.fit` instead (alternatively, use `autogluon.timeseries.splitter.ExpandingWindowSplitter`).
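A migration sketch for the `TimeSeriesPredictor` changes above, assuming `data` and `test_data` are `TimeSeriesDataFrame` objects:

```python
from autogluon.timeseries import TimeSeriesPredictor

# Irregular timestamps: either resample the data to a regular frequency...
data = data.convert_frequency(freq="D")
# ...or tell the predictor which frequency to use (replaces ignore_time_index=True).
predictor = TimeSeriesPredictor(freq="D", prediction_length=7).fit(data, hyperparameters="light")

scores = predictor.evaluate(test_data)  # now returns a dict of metric values (previously a float)
names = predictor.model_names()         # was: predictor.get_model_names()
best = predictor.model_best             # was: predictor.get_model_best()
```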

Papers

AutoGluon-TimeSeries: AutoML for Probabilistic Time Series Forecasting

We have published a paper on AutoGluon-TimeSeries at AutoML Conference 2023 ([Paper Link](https://openreview.net/forum?id=XHIY3cQ8Tew), [YouTube Video](https://www.youtube.com/watch?v=niLmfjXeHnE)). In the paper, we benchmarked AutoGluon and popular open-source forecasting frameworks (including DeepAR, TFT, AutoARIMA, AutoETS, AutoPyTorch). AutoGluon produces SOTA results in point and probabilistic forecasting, and even **achieves 65% win rate against the best-in-hindsight combination of models**.

TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications

We have published a paper on Tabular Zeroshot-HPO ensembling simulation to arXiv ([Paper Link](https://arxiv.org/pdf/2311.02971.pdf), [GitHub](https://github.com/autogluon/tabrepo)). This paper is key to achieving the performance improvements seen in AutoGluon 1.0, and we plan to continue to develop the code-base to support future enhancements.

XTab: Cross-table Pretraining for Tabular Transformers

We have published a paper on tabular Transformer pre-training at ICML 2023 ([Paper Link](https://arxiv.org/abs/2305.06090), [GitHub](https://github.com/BingzhaoZhu/XTab)). In the paper we demonstrate state-of-the-art performance for tabular deep learning models, including being able to match the performance of XGBoost and LightGBM models. While the pre-trained transformer is not yet incorporated into AutoGluon, we plan to integrate it in a future release.

Learning Multimodal Data Augmentation in Feature Space

Our paper on learning multimodal data augmentation was accepted at ICLR 2023 ([Paper Link](https://arxiv.org/pdf/2212.14453.pdf), [GitHub](https://github.com/lzcemma/LeMDA/)). This paper introduces a plug-and-play module to learn multimodal data augmentation in feature space, with no constraints on the identities of the modalities or the relationship between modalities. We show that it can (1) improve the performance of multimodal deep learning architectures, (2) apply to combinations of modalities that have not been previously considered, and (3) achieve state-of-the-art results on a wide range of applications comprised of image, text, and tabular data. This work is not yet incorporated into AutoGluon, but we plan to integrate it in a future release.

Data Augmentation for Object Detection via Controllable Diffusion Models

Our paper on generative object detection data augmentation has been accepted at WACV 2024 (Paper and GitHub link will be available soon). This paper proposes a data augmentation pipeline based on controllable diffusion models and CLIP, with visual prior generation to guide the generation and post-filtering by category-calibrated CLIP scores to control its quality. We demonstrate that the performance improves across various tasks and settings when using our augmentation pipeline with different detectors. Although diffusion models are currently not integrated into AutoGluon, we plan to incorporate the data augmentation techniques in a future release.

Adapting Image Foundation Models for Video Understanding

We have published a paper on how to efficiently adapt image foundation models for video understanding at ICLR 2023 ([Paper Link](https://arxiv.org/pdf/2302.03024.pdf), [GitHub](https://github.com/taoyang1122/adapt-image-models)). This paper introduces spatial adaptation, temporal adaptation and joint adaptation to gradually equip a frozen image model with spatiotemporal reasoning capability. The proposed method achieves competitive or even better performance than traditional full finetuning while largely saving the training cost of large foundation models.

1.0.0

Today is finally the day... AutoGluon 1.0 has arrived!! After [over four years of development](https://automlpodcast.com/episode/autogluon-the-story) and [2061 commits from 111 contributors](https://github.com/autogluon/autogluon/graphs/contributors), we are excited to share with you the culmination of our efforts to create and democratize the most powerful, easy to use, and feature rich automated machine learning system in the world.

AutoGluon 1.0 comes with transformative enhancements to predictive quality resulting from the combination of multiple novel ensembling innovations, spotlighted below. Besides performance enhancements, many other improvements have been made that are detailed in the individual module sections.

This release supports Python versions 3.8, 3.9, 3.10, and 3.11. Loading models trained on older versions of AutoGluon is not supported. Please re-train models using AutoGluon 1.0.

This release contains 223 commits from 17 contributors!

Full Contributor List (ordered by number of commits):

shchur, zhiqiangdon, Innixma, prateekdesai04, FANGAreNotGnu, yinweisu, taoyang1122, LennartPurucker, Harry-zzh, AnirudhDagar, jaheba, gradientsky, melopeo, ddelange, tonyhoo, canerturkmen, suzhoum

Join the community: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N)
Get the latest updates: [![Twitter](https://img.shields.io/twitter/follow/autogluon?style=social)](https://twitter.com/autogluon)

Spotlight

Tabular Performance Enhancements

0.8.2

As always, only load previously trained models using the same version of AutoGluon that they were originally trained on.
Loading models trained in different versions of AutoGluon is not supported.

See the full commit change-log here: https://github.com/autogluon/autogluon/compare/0.8.1...0.8.2

This version supports Python versions 3.8, 3.9, and 3.10.

Changes

* codespell: action, config + some typos fixed yarikoptic yinweisu (3323)
* Unpin sentencepiece zhiqiangdon (3368)
* Pin pydantic yinweisu (3370)

0.8.1

v0.8.1 is a bug fix release.

As always, only load previously trained models using the same version of AutoGluon that they were originally trained on.
Loading models trained in different versions of AutoGluon is not supported.

See the full commit change-log here: https://github.com/autogluon/autogluon/compare/0.8.0...0.8.1

This version supports Python versions 3.8, 3.9, and 3.10.

Changes

Documentation improvements

* Update google analytics property gidler (3330)
* Add Discord Link Innixma (3332)
* Add community section to website front page Innixma (3333)
* Update Windows Conda install instructions gidler (3346)
* Add some missing Colab buttons in tutorials gidler (3359)


Bug Fixes / General Improvements

* Move PyMuPDF to optional Innixma zhiqiangdon (3331)
* Remove TIMM in core setup Innixma (3334)
* Update persist_models max_memory 0.1 -> 0.4 Innixma (3338)
* Lint modules yinweisu (3337, 3339, 3344, 3347)
* Remove fairscale zhiqiangdon (3342)
* Fix refit crash Innixma (3348)
* Fix `DirectTabular` model failing for some metrics; hide warnings produced by `AutoARIMA` shchur (3350)
* Pin dependencies yinweisu (3358)
* Reduce per gpu batch size for AutoMM high_quality_hpo to avoid out of memory error for some corner cases zhiqiangdon (3360)
* Fix HPO crash by setting reuse_actor to False yinweisu (3361)

0.8.0

We're happy to announce the AutoGluon 0.8 release.

NEW: [![](https://img.shields.io/discord/1043248669505368144?logo=discord&style=flat)](https://discord.gg/wjUmjqAc2N) Join our official community discord server to ask questions and get involved!

Note: Loading models trained in different versions of AutoGluon is not supported.

This release contains 196 commits from 20 contributors!

See the full commit change-log here: https://github.com/autogluon/autogluon/compare/0.7.0...0.8.0

Special thanks to geoalgo for the joint work in generating the experimental tabular Zeroshot-HPO portfolio this release!

Full Contributor List (ordered by number of commits):

shchur, Innixma, yinweisu, gradientsky, FANGAreNotGnu, zhiqiangdon, gidler, liangfu, tonyhoo, cheungdaven, cnpgs, giswqs, suzhoum, yongxinw, isunli, jjaeyeon, xiaochenbin9527, yzhliu, jsharpna, sxjscience

AutoGluon 0.8 supports Python versions 3.8, 3.9, and 3.10.

Changes

Highlights
* AutoGluon TimeSeries introduced several major improvements, including new models, upgraded presets that lead to better forecast accuracy, and optimizations that speed up training & inference.
* AutoGluon Tabular now supports **[calibrating the decision threshold in binary classification](https://auto.gluon.ai/stable/tutorials/tabular/tabular-indepth.html#decision-threshold-calibration)** ([API](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.calibrate_decision_threshold.html)), leading to massive improvements in metrics such as `f1` and `balanced_accuracy`. It is not uncommon to see `f1` scores improve from `0.70` to `0.73` as an example. We **strongly** encourage all users who are using these metrics to try out the new decision threshold calibration logic.
* AutoGluon MultiModal introduces two new features: 1) [**PDF document classification**](https://auto.gluon.ai/stable/tutorials/multimodal/document/pdf_classification.html), and 2) [**Open Vocabulary Object Detection**](https://auto.gluon.ai/stable/tutorials/multimodal/object_detection/quick_start/quick_start_ovd.html).
* AutoGluon MultiModal upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.
* AutoGluon Tabular has added an experimental **Zeroshot HPO config** which performs well on small datasets <10000 rows when at least an hour of training time is provided (~60% win-rate vs `best_quality`). To try it out, specify `presets="experimental_zeroshot_hpo_hybrid"` when calling `fit()`.
* AutoGluon EDA added support for [**Anomaly Detection**](https://auto.gluon.ai/stable/tutorials/eda/eda-auto-anomaly-detection.html) and [**Partial Dependence Plots**](https://auto.gluon.ai/stable/tutorials/eda/eda-auto-analyze-interaction.html#using-interaction-charts-to-learn-information-about-the-data).
* AutoGluon Tabular has added experimental support for **[TabPFN](https://github.com/automl/TabPFN)**, a pre-trained tabular transformer model. Try it out via `pip install autogluon.tabular[all,tabpfn]` (hyperparameter key is "TABPFN")! You can also try it out via specifying `presets="experimental_extreme_quality"`.

General
* General doc improvements tonyhoo Innixma yinweisu gidler cnpgs isunli giswqs (2940, 2953, 2963, 3007, 3027, 3059, 3068, 3083, 3128, 3129, 3130, 3147, 3174, 3187, 3256, 3258, 3280, 3306, 3307, 3311, 3313)
* General code fixes and improvements yinweisu Innixma (2921, 3078, 3113, 3140, 3206)
* CI improvements yinweisu gidler yzhliu liangfu gradientsky (2965, 3008, 3013, 3020, 3046, 3053, 3108, 3135, 3159, 3283, 3185)
* New AutoGluon Webpage gidler shchur (2924)
* Support sample_weight in RMSE jjaeyeon (3052)
* Move AG search space to common yinweisu (3192)
* Deprecation utils yinweisu (3206, 3209)
* Update namespace packages for PEP420 compatibility gradientsky (3228)

Multimodal

AutoGluon MultiModal (also known as AutoMM) introduces two new features: 1) PDF document classification, and 2) Open Vocabulary Object Detection. Additionally, we have upgraded the presets for object detection, now offering `medium_quality`, `high_quality`, and `best_quality` options. The empirical results demonstrate significant ~20% relative improvements in the mAP (mean Average Precision) metric, using the same preset.

New Features
* PDF Document Classification. See [tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/document/pdf_classification.html) cheungdaven (#2864, 3043)
* Open Vocabulary Object Detection. See [tutorial](https://auto.gluon.ai/stable/tutorials/multimodal/object_detection/quick_start/quick_start_ovd.html) FANGAreNotGnu (#3164)

Performance Improvements
* Upgrade the detection engine from mmdet 2.x to mmdet 3.x, and upgrade our presets FANGAreNotGnu (3262)
* `medium_quality`: yolox-s -> yolox-l
* `high_quality`: yolox-l -> DINO-Res50
* `best_quality`: yolox-x -> DINO-Swin_l
* Speedup fusion model training with deepspeed strategy. liangfu (2932)
* Enable detection backbone freezing to boost finetuning speed and save GPU usage FANGAreNotGnu (3220)

Other Enhancements
* Support passing data path to the fit() API zhiqiangdon (3006)
* Upgrade TIMM to the latest v0.9.* zhiqiangdon (3282)
* Support xywh output for object detection FANGAreNotGnu (2948)
* Fusion model inference acceleration with TensorRT liangfu (2836, 2987)
* Support customizing advanced image data augmentation. Users can pass a list of [torchvision transform](https://pytorch.org/vision/stable/transforms.html#geometry) objects as image augmentation. zhiqiangdon (3022)
* Add yolox-m and yolox-tiny FANGAreNotGnu (3038)
* Add MultiImageMix Dataset for Object Detection FANGAreNotGnu (3094)
* Support loading specific checkpoints. Users can load the intermediate checkpoints other than model.ckpt and last.ckpt. zhiqiangdon (3244)
* Add some predictor properties for model statistics zhiqiangdon (3289)
* `trainable_parameters` returns the number of trainable parameters.
* `total_parameters` returns the number of total parameters.
* `model_size` returns the model size measured by megabytes.

Bug Fixes / Code and Doc Improvements
* General bug fixes and improvements zhiqiangdon liangfu cheungdaven xiaochenbin9527 Innixma FANGAreNotGnu gradientsky yinweisu yongxinw (2939, 2989, 2983, 2998, 3001, 3004, 3006, 3025, 3026, 3048, 3055, 3064, 3070, 3081, 3090, 3103, 3106, 3119, 3155, 3158, 3167, 3180, 3188, 3222, 3261, 3266, 3277, 3279, 3261, 3267)
* General doc improvements suzhoum (3295, 3300)
* Remove clip from fusion models liangfu (2946)
* Refactor inferring problem type and output shape zhiqiangdon (3227)
* Log GPU info including GPU total memory, free memory, GPU card name, and CUDA version during training zhiqiangdon (3291)


Tabular

New Features
* Added `calibrate_decision_threshold` ([tutorial](https://auto.gluon.ai/stable/tutorials/tabular/tabular-indepth.html#decision-threshold-calibration)), which optimizes a given metric's decision threshold for predictions to strongly enhance the metric score (see the sketch after this list). Innixma (3298)
* We've added an experimental Zeroshot HPO config, which performs well on small datasets <10000 rows when at least an hour of training time is provided. To try it out, specify `presets="experimental_zeroshot_hpo_hybrid"` when calling `fit()` Innixma geoalgo (3312)
* The [TabPFN model](https://auto.gluon.ai/stable/api/autogluon.tabular.models.html#tabpfnmodel) is now supported as an experimental model. TabPFN is a viable model option when inference speed is not a concern, and the number of rows of training data is less than 10,000. Try it out via `pip install autogluon.tabular[all,tabpfn]`! Innixma (3270)
* Backend support for distributed training, which will be available with the next Cloud module release. yinweisu (3054, 3110, 3115, 3131, 3142, 3179, 3216)
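A sketch of decision threshold calibration for a binary classification metric such as `f1` (the data variables are assumed; the prediction-time argument follows the linked tutorial):

```python
from autogluon.tabular import TabularPredictor

predictor = TabularPredictor(label="class", eval_metric="f1").fit(train_data, time_limit=600)

# Find the decision threshold that maximizes the eval metric on validation / OOF predictions.
threshold = predictor.calibrate_decision_threshold()
y_pred = predictor.predict(test_data, decision_threshold=threshold)
```
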
Performance Improvements
* Accelerate boolean preprocessing Innixma (2944)
Other Enhancements
* Add quantile regression support for CatBoost shchur (3165)
* Implement quantile regression for LGBModel shchur (3168)
* Log to file support yinweisu (3232)
* Add support for `included_model_types` yinweisu (3239)
* Add enable_categorical=True support to XGBoost Innixma (3286)
Bug Fixes / Code and Doc Improvements
* Cross-OS loading of a fit TabularPredictor should now work properly yinweisu Innixma
* General bug fixes and improvements Innixma cnpgs shchur yinweisu gradientsky (2865, 2936, 2990, 3045, 3060, 3069, 3148, 3182, 3199, 3226, 3257, 3259, 3268, 3269, 3287, 3288, 3285, 3293, 3294, 3302)
* Move interpretable logic to InterpretableTabularPredictor Innixma (2981)
* Enhance drop_duplicates, enable by default Innixma (3010)
* Refactor params_aux & memory checks Innixma (3033)
* Raise regression `pred_proba` Innixma (3240)


TimeSeries
In v0.8 we introduce several major improvements to the Time Series module, including new models, upgraded presets that lead to better forecast accuracy, and optimizations that speed up training & inference.

Highlights
- New models: `PatchTST` and `DLinear` from GluonTS, and `RecursiveTabular` based on integration with the [`mlforecast`](https://github.com/Nixtla/mlforecast) library shchur (#3177, 3184, 3230)
- Improved accuracy and reduced overall training time thanks to updated presets shchur (3281, 3120)
- 3-6x faster training and inference for `AutoARIMA`, `AutoETS`, `Theta`, `DirectTabular`, `WeightedEnsemble` models shchur (3062, 3214, 3252)

New Features
- Dramatically faster repeated calls to `predict()`, `leaderboard()` and `evaluate()` thanks to prediction caching shchur (3237)
- Reduce overfitting by using multiple validation windows with the `num_val_windows` argument to `fit()` (see the sketch after this list) shchur (3080)
- Exclude certain models from presets with the `excluded_model_types` argument to `fit()` shchur (3231)
- New method `refit_full()` that refits models on combined train and validation data shchur (3157)
- Train multiple configurations of the same model by providing lists in the `hyperparameters` argument shchur (3183)
- Time limit set by `time_limit` is now respected by all models shchur (3214)
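A sketch of several of the new `fit()` options above (the model and hyperparameter names are illustrative):

```python
from autogluon.timeseries import TimeSeriesPredictor

predictor = TimeSeriesPredictor(prediction_length=24)
predictor.fit(
    train_data,
    num_val_windows=3,                    # reduce overfitting with multiple validation windows
    excluded_model_types=["DeepAR"],      # skip selected models from the preset
    hyperparameters={"DLinear": [{"context_length": 96}, {"context_length": 192}]},  # multiple configs of one model
    time_limit=1200,
)
predictor.refit_full()                        # refit models on combined train + validation data
predictions = predictor.predict(train_data)   # repeated predict()/leaderboard()/evaluate() calls are cached
```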

Enhancements
- Improvements to the `DirectTabular` model (previously called `AutoGluonTabular`): faster featurization, trained as a quantile regression model if `eval_metric` is set to `"mean_wQuantileLoss"` shchur (2973, 3211)
- Use correct seasonal period when computing the MASE metric shchur (2970)
- Check the AutoGluon version when loading `TimeSeriesPredictor` from disk shchur (3233)

Minor Improvements / Documentation / Bug Fixes
* Update documentation and tutorials shchur (2960, 2964, 3296, 3297)
* General bug fixes and improvements shchur (2977, 3058, 3066, 3160, 3193, 3202, 3236, 3255, 3275, 3290)

Exploratory Data Analysis (EDA) tools
In 0.8 we introduce a few new tools to help with data exploration and feature engineering:
* **Anomaly Detection** gradientsky (3124, 3137) - helps to identify unusual patterns or behaviors in data that deviate significantly from the norm. It's best used when finding outliers, rare events, or suspicious activities that could indicate fraud, defects, or system failures. Check the [Anomaly Detection Tutorial](https://auto.gluon.ai/stable/tutorials/eda/eda-auto-anomaly-detection.html) to explore the functionality.
* **Partial Dependence Plots** gradientsky (3071, 3079) - visualize the relationship between a feature and the model's output for each individual instance in the dataset. A two-way variant can visualize potential interactions between any two features. Please see this tutorial for more detail: [Using Interaction Charts To Learn Information About the Data](https://auto.gluon.ai/stable/tutorials/eda/eda-auto-analyze-interaction.html#using-interaction-charts-to-learn-information-about-the-data)
Bug Fixes / Code and Doc Improvements
* Switch regression analysis in `quick_fit` to use residuals plot gradientsky (3039)
* Added `explain_rows` method to `autogluon.eda.auto` - Kernel SHAP visualization gradientsky (3014)
* General improvements and fixes gradientsky (2991, 3056, 3102, 3107, 3138)
