Pytorch-widedeep

Latest version: v1.5.1



4. Better integration with `torchvision` for the `deepimage` component of a `WideDeep` model
5. Three available models for the `deeptext` component of a `WideDeep` model, namely `BasicRNN`, `AttentiveRNN` and `StackedAttentiveRNN`


3. Revisited and polished the docs


Just as a reminder, the current deep learning models for tabular data available in the library are:
- Wide
- TabMlp
- TabResNet
- [TabNet](https://arxiv.org/abs/1908.07442)
- [TabTransformer](https://arxiv.org/abs/2012.06678)
- [FTTransformer](https://arxiv.org/abs/2106.11959v2)
- [SAINT](https://arxiv.org/abs/2106.01342)
- [TabFastformer](https://arxiv.org/abs/2108.09084)
- [TabPerceiver](https://arxiv.org/abs/2103.03206)
- [BayesianWide](https://arxiv.org/abs/1505.05424)
- [BayesianTabMlp](https://arxiv.org/abs/1505.05424)
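
All of these tabular models plug into the same `WideDeep` architecture: each component produces a score and the scores are summed into a single logit before the output activation. Below is a minimal plain-Python sketch of that combination; the function and parameter names are illustrative, not the library's actual API (the real wiring lives inside the `WideDeep` model class).

```python
def linear(x, w, b):
    # dot(x, w) + b
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def wide_deep_logit(x_wide, x_deep, wide_w, wide_b, deep_layers):
    """Illustrative sketch of the Wide & Deep idea: the wide (linear) part
    and the deep (MLP) part each produce a score, and the scores are summed
    into one logit before the output activation."""
    wide_out = linear(x_wide, wide_w, wide_b)
    # deep component: ReLU hidden layers followed by a linear output layer
    h = x_deep
    for w_mat, b_vec in deep_layers[:-1]:
        h = [max(0.0, linear(h, row, b)) for row, b in zip(w_mat, b_vec)]
    out_w, out_b = deep_layers[-1]
    deep_out = linear(h, out_w[0], out_b[0])
    return wide_out + deep_out
```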

2. The text-related component now has three available models, all based on RNNs. There are reasons for that choice, although integration with the Hugging Face Transformers library is the next step in the library's development. The three available models are:
- BasicRNN
- AttentiveRNN
- StackedAttentiveRNN

The last two are based on [Hierarchical Attention Networks for Document Classification](https://www.cs.cmu.edu/~hovy/papers/16HLT-hierarchical-attention-networks.pdf). See the docs for details.

3. The image-related component is now fully integrated with the latest [torchvision](https://pytorch.org/vision/stable/models.html) release, including the new [Multi-Weight Support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/). Currently, the model variants supported by our library are:
- resnet
- shufflenet
- resnext
- wide_resnet
- regnet
- densenet
- mobilenet
- mnasnet
- efficientnet
- squeezenet
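
The Multi-Weight Support API pairs each model builder with a weights enum: every pretrained variant carries its own metadata and preprocessing recipe, and a `DEFAULT` member aliases the recommended one. The sketch below illustrates that pattern in plain Python; the class and entry names are made up for the example and are not the actual torchvision (or pytorch-widedeep) classes.

```python
from dataclasses import dataclass
from enum import Enum

@dataclass(frozen=True)
class WeightEntry:
    # metadata attached to one pretrained-weight variant
    url: str
    num_params: int
    acc_top1: float

class DemoResNetWeights(Enum):
    IMAGENET1K_V1 = WeightEntry("https://example.com/v1.pth", 11_689_512, 69.758)
    IMAGENET1K_V2 = WeightEntry("https://example.com/v2.pth", 11_689_512, 73.314)
    DEFAULT = IMAGENET1K_V2  # enum alias: resolves to the recommended recipe

def demo_resnet(weights=None):
    """Model builder in the multi-weight style: `weights` selects a
    pretrained recipe, or None for random initialisation."""
    if weights is not None:
        return f"resnet initialised from {weights.value.url}"
    return "resnet with random init"
```

Because `DEFAULT` is an alias, `DemoResNetWeights.DEFAULT is DemoResNetWeights.IMAGENET1K_V2` holds, so upgrading the recommended recipe only requires changing one line.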

v1.5.1

Mostly fixed issue #204

v1.5.0
Added two new embedding methods for numerical features described in [On Embeddings for Numerical Features in Tabular Deep Learning](https://arxiv.org/abs/2203.05556) and adjusted all models and functionalities accordingly
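
One of the encodings described in that paper is piecewise-linear: a scalar is mapped to one component per bin, saturating at 1 for bins it has passed and 0 for bins it has not reached. The following is a plain-Python sketch of that idea only; the library's own implementation differs in detail (it works on tensors and learns the subsequent embedding).

```python
def piecewise_linear_encoding(x, bin_edges):
    """Sketch of piecewise-linear encoding (arXiv:2203.05556): component t
    is 1.0 if x is past bin t, 0.0 if x is before it, and the fractional
    position of x inside bin t otherwise."""
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        if x >= hi:
            out.append(1.0)
        elif x <= lo:
            out.append(0.0)
        else:
            out.append((x - lo) / (hi - lo))
    return out
```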


v1.4.0
This release mainly adds the functionality to deal with datasets that are too large to fit in memory, via the `load_from_folder` module.

This module is inspired by the `ImageFolder` class in the `torchvision` library but adapted to the needs of our library. See the docs for details.
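
The core idea behind such a module is to index the data on disk once and then read individual observations lazily, instead of loading the whole table into memory. The class below is a minimal illustrative sketch of that pattern for a CSV file; it is not the library's `load_from_folder` API, which is considerably richer.

```python
import csv

class CSVFromFolder:
    """Sketch of a disk-backed dataset in the spirit of torchvision's
    ImageFolder: byte offsets are recorded once, and rows are read lazily
    by index rather than held in memory."""

    def __init__(self, path):
        self.path = path
        self.offsets = []
        with open(path, "rb") as f:
            f.readline()  # skip the header row
            off = f.tell()
            while f.readline():
                self.offsets.append(off)
                off = f.tell()

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        # seek straight to the requested row and parse only that line
        with open(self.path, "rb") as f:
            f.seek(self.offsets[idx])
            line = f.readline().decode().rstrip("\n")
        return next(csv.reader([line]))
```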


v1.3.2
1. Added [Flash Attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
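
Flash Attention computes exactly the standard scaled dot-product attention, just with a tiled, memory-efficient kernel. The reference definition it implements can be sketched in plain Python as below; this is only the mathematical definition on lists of shape `[seq, dim]`, not the fused implementation PyTorch dispatches to.

```python
import math

def scaled_dot_product_attention(q, k, v):
    """Reference (non-flash) scaled dot-product attention:
    softmax(q @ k.T / sqrt(dim)) @ v, on plain Python lists."""
    dim = len(q[0])
    out = []
    for qi in q:
        # attention scores of this query against every key
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(dim) for kj in k]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        denom = sum(exps)
        weights = [e / denom for e in exps]
        # weighted sum of the value vectors
        out.append([sum(w * vj[d] for w, vj in zip(weights, v))
                    for d in range(len(v[0]))])
    return out
```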

v1.3.1

1. Added example scripts and notebooks on how to use the library in the context of recommendation systems, using [this notebook](https://www.kaggle.com/code/matanivanov/wide-deep-learning-for-recsys-with-pytorch) as an example. This is a response to issue #133
2. Took the opportunity to add the MovieLens 100k dataset to the library, so that it can now be imported from the `datasets` module
3. Added a simple (not pre-trained) transformer model to the text component
4. Added a citation file
5. Fixed a bug where the padding index was not 1 when using the fastai transforms

v1.3.0

* Added new functionality to access feature importances via attention weights for all DL models for tabular data except for the `TabPerceiver`. These importances are accessed via the `feature_importance` attribute of the `Trainer` (computed during training with a sample of observations) and, at predict time, via the `explain` method.
* Fixed the restore-weights capability in all forms of training. This capability lives in two callbacks, `EarlyStopping` and `ModelCheckpoint`. Prior to this release a bug prevented the weights from being restored.
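
The attention-based importances amount to averaging the attention each feature receives across observations and heads, then normalising. The function below is an illustrative sketch of that reduction on plain Python lists; it is not pytorch-widedeep's actual `feature_importance` code, which operates on the model's real attention matrices.

```python
def attention_feature_importance(attn_weights, feature_names):
    """Sketch of attention-based feature importance: sum the attention each
    feature receives over all observations and heads, then normalise so the
    importances sum to 1.

    attn_weights: nested lists of shape [n_obs][n_heads][n_features].
    """
    totals = [0.0] * len(feature_names)
    for obs in attn_weights:
        for head in obs:
            for j, w in enumerate(head):
                totals[j] += w
    denom = sum(totals)
    return {name: totals[j] / denom for j, name in enumerate(feature_names)}
```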

joss_paper_package_version_v1.2.0

