pytorch-forecasting

Latest version: v1.0.0


0.5.3

Fixes

- Fix issue where hyperparameter verbosity controlled only part of the output (118)
- Fix occasional error when `.get_parameters()` from a `TimeSeriesDataSet` failed (117); typical usage is sketched after this list
- Remove redundant double pass through the LSTM for the temporal fusion transformer (125)
- Prevent installation of pytorch-lightning 1.0.4 as it breaks the code (127)
- Prevent in-place modification of model defaults (112)
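
The fix above restores the dataset-parameter round trip. A minimal sketch of typical usage, where `training_dataset` is an existing `TimeSeriesDataSet` and `validation_data` is a placeholder dataframe with the same columns:

```python
from pytorch_forecasting import TimeSeriesDataSet

# extract the full configuration of an existing training dataset
params = training_dataset.get_parameters()

# rebuild a dataset with the same configuration on new data
validation_dataset = TimeSeriesDataSet.from_parameters(params, validation_data)
```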

---

0.5.2

Added

- Hyperparameter tuning with optuna added to the tutorial
- Control over verbosity of hyperparameter tuning

Fixes

- Fix interpretation error when different batches had different maximum decoder lengths
- Fix some typos (no changes to user API)

---

0.5.1

This release has only one purpose: allow usage of PyTorch Lightning 1.0. All tests have passed.

---

0.5.0

Added

- Additional checks for `TimeSeriesDataSet` inputs: now flagging if series are lost due to a high `min_encoder_length` and ensuring that parameters are integers
- Enable classification: simply change the target in the `TimeSeriesDataSet` to a non-float variable, use the `CrossEntropy` metric for optimization, and output as many classes as you want to predict (see the sketch after this list)
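
A minimal sketch of the classification setup described above; the dataframe `data` and its column names are placeholders, and the exact constructor arguments depend on your data:

```python
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder
from pytorch_forecasting.metrics import CrossEntropy

# `data` is a placeholder dataframe with a categorical (non-float) target "status"
dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="status",                      # non-float target switches to classification
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    target_normalizer=NaNLabelEncoder(),  # encode the categorical target
)

# optimize with cross-entropy; the output size follows the number of classes
model = TemporalFusionTransformer.from_dataset(dataset, loss=CrossEntropy())
```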

Changed

- Ensured PyTorch Lightning 0.10 compatibility
- Use `LearningRateMonitor` instead of `LearningRateLogger`
- Use the `EarlyStopping` callback via the trainer's `callbacks` argument instead of the `early_stopping` argument
- Update metric system to the `update()` and `compute()` methods
- Use `Tuner(trainer).lr_find()` instead of `trainer.lr_find()` in tutorials and examples (see the sketch after this list)
- Update poetry to 1.1.0
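
A minimal sketch of the resulting trainer setup, assuming recent pytorch-lightning import paths and an existing `model`, `train_dataloader`, and `val_dataloader`:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.tuner import Tuner

# EarlyStopping and LearningRateMonitor are passed as callbacks,
# replacing the old early_stopping argument and LearningRateLogger
trainer = Trainer(
    max_epochs=30,
    callbacks=[
        EarlyStopping(monitor="val_loss", patience=10),
        LearningRateMonitor(),
    ],
)

# lr_find now lives on a Tuner wrapping the trainer instead of the trainer itself
lr_finder = Tuner(trainer).lr_find(
    model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader
)
print(lr_finder.suggestion())
```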

---

0.4.1

Fixes

Model

- Removed attention to current datapoint in the TFT decoder to generalise better over various sequence lengths
- Allow resuming an optuna hyperparameter tuning study

Data

- Fixed inconsistent naming and calculation of `encoder_length` in `TimeSeriesDataSet` when added as a feature

Contributors

- jdb78

---

0.4.0

Added

Models

- Backcast loss for the N-BEATS network for better regularisation (see the sketch after this list)
- `logging_metrics` as an explicit argument to models
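
A minimal sketch of both additions, assuming an existing `dataset` suitable for N-BEATS (univariate, no covariates); the values are illustrative:

```python
from torch import nn

from pytorch_forecasting import NBeats
from pytorch_forecasting.metrics import MAE, SMAPE

model = NBeats.from_dataset(
    dataset,
    backcast_loss_ratio=0.1,  # weight of the loss on the encoder (backcast) reconstruction
    logging_metrics=nn.ModuleList([SMAPE(), MAE()]),  # metrics reported during training
)
```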

Metrics

- MASE (mean absolute scaled error) metric for training and reporting
- Metrics can be composed, e.g. `0.3 * metric1 + 0.7 * metric2` (see the sketch after this list)
- Aggregation metric that is computed on the mean prediction over all samples to reduce mean bias
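
A minimal sketch of composing metrics as described above; the weights are illustrative:

```python
from pytorch_forecasting.metrics import MAE, MASE

# weighted combination of two metrics; the composite can be used as a training loss
composite = 0.3 * MAE() + 0.7 * MASE()
```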

Data

- Increased speed of parsing data with missing datapoints: about 2s for 1M data points, or 0.2s if `numba` is installed
- Time-synchronize samples in batches: ensure that all samples in each batch share the same time index in the decoder (see the sketch after this list)
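
A minimal sketch of requesting time-synchronized batches, assuming an existing `dataset`; the string `"synchronized"` selects the built-in `TimeSynchronizedBatchSampler`:

```python
# all samples in each batch from this dataloader share the same decoder time index
dataloader = dataset.to_dataloader(
    train=True,
    batch_size=64,
    batch_sampler="synchronized",
)
```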

Breaking changes

- Improved subsequence detection in `TimeSeriesDataSet` ensures that there exists a subsequence starting and ending on each point in time.
- Fix `min_encoder_length = 0` being ignored and processed as `min_encoder_length = max_encoder_length`

Contributors

- jdb78
- dehoyosb
