pytorch-widedeep

Latest version: v1.5.1


1.0.5

The two main additions to the library, both illustrated in the sketch below, are:

- SAINT from [SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training](https://arxiv.org/abs/2106.01342) and
- FT-Transformer from [Revisiting Deep Learning Models for Tabular Data](https://arxiv.org/abs/2106.11959)
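
Both models plug in as the `deeptabular` component of a `WideDeep` model. A minimal sketch follows, with a made-up column setup; parameter names such as `cat_embed_input` follow the current docs and may differ slightly in older releases:

```python
# Sketch: the new FT-Transformer as the deeptabular component. The columns
# are hypothetical; check the FTTransformer signature in your installed version.
from pytorch_widedeep.models import FTTransformer, WideDeep

ft_transformer = FTTransformer(
    column_idx={"city": 0, "age": 1},  # column name -> position in the input array
    cat_embed_input=[("city", 10)],    # (categorical column, number of unique values)
    continuous_cols=["age"],
)
model = WideDeep(deeptabular=ft_transformer)
```

`SAINT` is used in the same way, swapping in the corresponding class.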


In addition:

- A new `DataLoader` for imbalanced datasets. See [here](https://github.com/jrzaurin/pytorch-widedeep/blob/master/pytorch_widedeep/dataloaders.py).
- Integration with [torchmetrics](https://torchmetrics.readthedocs.io/en/latest/).
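
Since the `Trainer` accepts a list of metrics, torchmetrics objects can be passed directly alongside (or instead of) the library's own metrics. A sketch, assuming a `model` built as above; note that recent torchmetrics releases require a `task` argument for `Accuracy`, while older ones do not:

```python
# Sketch: passing a torchmetrics metric to the Trainer. `model` is assumed
# to be a WideDeep instance; the binary objective is illustrative.
import torchmetrics
from pytorch_widedeep import Trainer

trainer = Trainer(
    model,
    objective="binary",
    metrics=[torchmetrics.Accuracy(task="binary")],
)
```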

1.0.0

This release represents a major step forward for the library in terms of functionalities and flexibility:

1. Ported TabNet from the fantastic implementation of the guys at [dreamquark-ai](https://github.com/dreamquark-ai/tabnet); a sketch of how it slots in follows this list.
2. Callbacks are now more flexible and save more information.
3. The `save` method in the `Trainer` is more flexible and transparent.
4. The library has been extensively tested via experiments against `LightGBM` (see [here](https://towardsdatascience.com/pytorch-widedeep-deep-learning-for-tabular-data-iv-deep-learning-vs-lightgbm-cadcbf571eaf))
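
For context, a sketch of how the ported TabNet slots in as the `deeptabular` component and is trained and saved through the `Trainer`. The column setup is hypothetical and parameter names may differ slightly between versions:

```python
# Sketch: the ported TabNet as the deeptabular component. Columns are
# hypothetical; check the TabNet signature in your installed version.
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabNet, WideDeep

tabnet = TabNet(
    column_idx={"city": 0, "age": 1},
    cat_embed_input=[("city", 10)],
    continuous_cols=["age"],
)
model = WideDeep(deeptabular=tabnet)
trainer = Trainer(model, objective="binary")
# trainer.fit(X_tab=X_tab, target=y, n_epochs=5)        # data prepared elsewhere
# trainer.save(path="model_dir", save_state_dict=True)  # see the docs for the exact signature
```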

0.4.8

This release represents an almost-complete refactor of the previous version, and I consider the code in this version well tested and production-ready. The main reason this release is not v1 is that I want to use it with a few more datasets first, while also making the version public to see whether others use it. I also want the changes between this last beta and v1 to be relatively minor.

**This version is not backwards compatible (at all).**

These are some of the structural changes:

* Building the model and training it are now completely decoupled
* Added the `TabTransformer` as a potential `deeptabular` component
* Renamed many of the parameters so that they are consistent between models
* Added the possibility of customising almost every single component: model component, losses, metrics and callbacks; a custom-callback sketch follows this list
* Added R2 metrics for regression problems
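
As an illustration of the customisation point above, a minimal custom callback. The base class lives in `pytorch_widedeep.callbacks`; the hook below mirrors the library's Keras-style callbacks, but treat the exact signature as an assumption and check the docs:

```python
# Sketch: a custom callback. The hook name mirrors Keras-style callbacks;
# the signatures in pytorch_widedeep.callbacks are the authority.
from pytorch_widedeep.callbacks import Callback

class EpochPrinter(Callback):
    """Print a short message at the end of every epoch."""

    def on_epoch_end(self, epoch, logs=None, metric=None):
        print(f"finished epoch {epoch}")
```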

0.4.7

The treatment of image datasets in `WideDeepDataset` replicates that of PyTorch. In particular, this source code:

```python
if isinstance(pic, np.ndarray):
    # handle numpy array
    if pic.ndim == 2:
        pic = pic[:, :, None]
```

In addition, I have added the possibility of using each of the model components in isolation. That is, one can now use the `wide`, `deepdense` (either `DeepDense` or `DeepDenseResnet`), `deeptext` and `deepimage` components independently, as sketched below.
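
For instance, a sketch of the `wide` (linear) component used on its own. The dimensions are made up, and the constructor argument names are an assumption (they have changed across releases, e.g. `wide_dim` vs `input_dim`):

```python
# Sketch: the wide (linear) component in isolation. Dimensions are made up;
# the first argument's name (input_dim vs wide_dim) varies across versions.
from pytorch_widedeep.models import Wide, WideDeep

wide = Wide(input_dim=100, pred_dim=1)  # 100 wide/one-hot features, 1 output
model = WideDeep(wide=wide)             # no deepdense, deeptext or deepimage
```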

0.4.6

As suggested in issue 26, I have added the possibility for the `deepdense` component (which receives the embeddings from the categorical columns together with the continuous columns) to be a series of dense ResNet blocks. This is available via the class `DeepDenseResnet`, which is used exactly as before:

```python
deepdense = DeepDenseResnet(...)

model = WideDeep(wide=wide, deepdense=deepdense)
```


In addition, code coverage has increased to 91%.

0.4.5

It is also worth mentioning that the printed loss for regression problems is no longer the RMSE but the MSE, for consistency with the metrics saved in the `History` callback.

**NOTE**: this does not change how one uses the package; `pytorch-widedeep` can be used in exactly the same way as previous versions. However, since the model components have changed, models generated with previous versions are not compatible with this one.
