Pytorch-widedeep

Latest version: v1.5.1


1.2.2

1. Fixed a bug related to the option of adding an FC head on top of the "backbone" models
2. Added a notebook illustrating how one could use a Hugging Face model along with any other model in the library

1.2.1

A simple minor release fixing the implementation of the additive attention (see 110)
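For background, additive attention (as in Bahdanau et al.) scores each key with a small feed-forward network rather than a dot product. Below is a minimal pure-Python sketch of the general technique; all weights and names are illustrative, and this is not the library's actual implementation:

```python
import math

def additive_attention(query, keys, w_q, w_k, v):
    """Additive attention: score_i = v . tanh(W_q q + W_k k_i).

    query: list[float]; keys: list[list[float]];
    w_q, w_k: weight matrices (lists of rows); v: scoring vector.
    All weights here are illustrative placeholders.
    """
    def matvec(m, x):
        return [sum(r * xi for r, xi in zip(row, x)) for row in m]

    q_proj = matvec(w_q, query)
    scores = []
    for k in keys:
        k_proj = matvec(w_k, k)
        hidden = [math.tanh(a + b) for a, b in zip(q_proj, k_proj)]
        scores.append(sum(vi * hi for vi, hi in zip(v, hidden)))

    # softmax over the scores gives the attention weights
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]

    # context vector: weighted sum of the keys (used here as values too)
    context = [sum(w * k[i] for w, k in zip(weights, keys))
               for i in range(len(keys[0]))]
    return weights, context
```

The key design point of the additive form is that the scoring network can learn a richer compatibility function than a plain dot product, at the cost of extra parameters.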

1.2.0

There are a number of changes and new features in this release; here is a summary:

1. Refactored the code related to the 3 forms of training in the library:
   - Supervised Training (via the `Trainer` class)
   - Self-Supervised Pre-Training: we have implemented two routines for self-supervised pre-training. These are:
     - Encoder-Decoder Pre-Training (via the `EncoderDecoderTrainer` class): this is inspired by the [TabNet paper](https://arxiv.org/abs/1908.07442)
     - Contrastive-Denoising Pre-Training (via the `ContrastiveDenoisingTrainer` class): this is inspired by the [SAINT paper](https://arxiv.org/abs/2106.01342)
   - Bayesian or Probabilistic Training (via the `BayesianTrainer` class): this is inspired by the paper [Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424)

1.1.2

This release simply updates all documentation

1.1.0

This release fixes some minor bugs but mainly brings a couple of new functionalities:

1. New experimental Attentive models, namely: `ContextAttentionMLP` and `SelfAttentionMLP`.
2. Two probabilistic models based on Bayes by Backprop (BBP), as described in [Weight Uncertainty in Neural Networks](https://arxiv.org/abs/1505.05424), namely: `BayesianTabMlp` and `BayesianWide`.
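As background on BBP: each weight is given a Gaussian posterior with a mean `mu` and a parameter `rho` that is mapped through a softplus to a positive standard deviation, and weights are drawn with the reparameterization trick. A schematic pure-Python sketch of these pieces (illustrative only, not the library's internals):

```python
import math
import random

def softplus(rho):
    # softplus keeps the standard deviation strictly positive
    return math.log1p(math.exp(rho))

def sample_weight(mu, rho, rng=random):
    """Reparameterization trick: w = mu + softplus(rho) * eps, eps ~ N(0, 1).

    Sampling this way keeps the draw differentiable w.r.t. mu and rho,
    which is what lets BBP train the posterior parameters by backprop.
    """
    eps = rng.gauss(0.0, 1.0)
    return mu + softplus(rho) * eps

def gaussian_log_prob(x, mu, sigma):
    # log density of N(mu, sigma^2); terms like this enter the
    # variational (KL) part of the BBP loss
    return (-0.5 * math.log(2 * math.pi)
            - math.log(sigma)
            - (x - mu) ** 2 / (2 * sigma ** 2))
```

At prediction time one typically samples the weights several times and averages the outputs, which is what gives these models their uncertainty estimates.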

1.0.10

This minor release simply fixes issue 53, where `SAINT`, the `FT-Transformer` and the `TabFastFormer` failed when the input data had no categorical columns

1.0.9

**Functionalities**:

- Added a new functionality called `Tab2Vec` that, given a trained model and a fitted tabular preprocessor, returns the input dataframe transformed into embeddings
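Conceptually, a transform like this looks up each categorical value's learned embedding row and concatenates it with the continuous columns. A hand-rolled sketch of the idea (the function, column names and embeddings below are made up; this is not `Tab2Vec`'s actual API):

```python
def row_to_vec(row, cat_cols, cont_cols, embeddings):
    """Turn one tabular row (a dict) into a flat embedding vector.

    embeddings: {column: {category: list[float]}} -- an illustrative
    stand-in for the embedding matrices of a trained model.
    """
    vec = []
    for col in cat_cols:
        # replace the categorical value with its embedding row
        vec.extend(embeddings[col][row[col]])
    for col in cont_cols:
        # continuous columns pass through as-is
        vec.append(float(row[col]))
    return vec
```

The resulting vectors can then be fed to any downstream model (e.g. a gradient-boosted tree) that expects purely numeric input.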


**TabFormers: expanded the `Tabformer` (Transformers for Tabular Data) family**

- Added a proper implementation of the [FT-Transformer](https://arxiv.org/abs/2106.11959) with Linear Attention (as introduced in the [Linformer](https://arxiv.org/abs/2006.04768) paper)
- Added a TabFastFormer model, an adaptation of the [FastFormer](https://arxiv.org/abs/2108.09084) for Tabular Data
- Added a TabPerceiver model, an adaptation of the [Perceiver](https://arxiv.org/abs/2103.03206) for Tabular Data
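For context on the linear attention mentioned above: the Linformer trick projects the keys and values from sequence length n down to a fixed k rows before attending, so the score matrix is n x k rather than n x n. A naive pure-Python sketch of this general mechanism (matrix sizes and names are illustrative, not the library's code):

```python
import math

def matmul(a, b):
    # naive matrix multiply over lists of lists
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(m):
    return [list(col) for col in zip(*m)]

def softmax_rows(m):
    out = []
    for row in m:
        mx = max(row)
        exps = [math.exp(x - mx) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def linear_attention(q, k, v, e, f):
    """Linformer-style attention.

    q, k, v: (n x d) matrices; e, f: (k_dim x n) projection matrices.
    Projecting k and v along the sequence dimension makes the score
    matrix (n x k_dim) instead of (n x n).
    """
    k_proj = matmul(e, k)                      # (k_dim x d)
    v_proj = matmul(f, v)                      # (k_dim x d)
    scores = matmul(q, transpose(k_proj))      # (n x k_dim)
    weights = softmax_rows(scores)
    return matmul(weights, v_proj)             # (n x d)
```

The savings matter little for short tabular "sequences", but the same projection keeps the cost linear in n for long ones.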


**Docs**

- Refined the docs to make them clearer and fixed a few typos
