Pytorch-forecasting

Latest version: v1.0.0


0.9.1

Added

- Use target name instead of target number for logging metrics (588)
- Optimizer can be initialized by passing string, class or function (602)
- Add support for multiple outputs in Baseline model (603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (619)
- Dropping support for Python 3.6 and starting support for Python 3.9 (639)
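The optimizer change (602) means the optimizer argument may be given as a string name, an optimizer class, or a factory function. A minimal standalone sketch of such a resolver; `resolve_optimizer` and `DummySGD` are hypothetical names, not the library's actual implementation:

```python
import inspect


class DummySGD:
    """Stand-in optimizer class so the sketch runs without torch installed."""

    def __init__(self, params, lr=0.01):
        self.params = list(params)
        self.lr = lr


def resolve_optimizer(spec, params, **kwargs):
    """Resolve an optimizer given as a string, a class, or a factory function."""
    registry = {"sgd": DummySGD}  # string name -> optimizer class
    if isinstance(spec, str):
        return registry[spec.lower()](params, **kwargs)
    if inspect.isclass(spec):
        return spec(params, **kwargs)
    if callable(spec):  # factory function
        return spec(params, **kwargs)
    raise ValueError(f"cannot resolve optimizer from {spec!r}")


# all three spellings yield an optimizer instance
opt = resolve_optimizer("sgd", [1.0, 2.0], lr=0.1)
```

The same `resolve_optimizer(DummySGD, ...)` or `resolve_optimizer(lambda p, **kw: DummySGD(p, **kw), ...)` calls take the class and factory paths, respectively.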

Fixed

- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (550)
- Added missing transformation of prediction for MLP (602)
- Fixed logging hyperparameters (688)
- Ensure MultiNormalizer fit state is detected (681)
- Fix infinite loop in TimeDistributedEmbeddingBag (672)

Contributors

- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K

0.9.0

Breaking changes

- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`.
  Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (518)
- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (518)
- Transparent handling of transformations. Forward methods should now call two new methods (518):

- `transform_output` to explicitly rescale the network outputs into the de-normalized space
- `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction`, the main network output, is still required.

Example:

```python
def forward(self, x):
    # network produces predictions in normalized space
    normalized_prediction = self.module(x)
    # rescale into the de-normalized space using the stored target scale
    prediction = self.transform_output(prediction=normalized_prediction, target_scale=x["target_scale"])
    return self.to_network_output(prediction=prediction)
```


Fixed

- Fix quantile prediction for tensors on GPUs for distribution losses (491)
- Fix hyperparameter update for RecurrentNetwork.from_dataset method (497)

Added

- Improved validation of input parameters of TimeSeriesDataSet (518)

0.8.5

Added

- Allow lists for multiple losses and normalizers (405)
- Warn if normalization uses a scale `< 1e-7` (429)
- Allow usage of distribution losses in all settings (434)
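The scale warning (429) guards against near-degenerate normalization, e.g. from an almost-constant target: with affine normalization `(x - center) / scale`, a scale near zero blows normalized values up to astronomical magnitudes. The helpers below are simplified stand-ins for the library's normalizer behavior, not its actual code:

```python
def normalize(x, center, scale):
    """Affine normalization into network space."""
    return (x - center) / scale


def denormalize(x_norm, center, scale):
    """Inverse transform back into data space."""
    return x_norm * scale + center


# a healthy scale round-trips cleanly
x = 3.0
restored = denormalize(normalize(x, 1.0, 2.0), 1.0, 2.0)

# a near-degenerate scale, well below the 1e-7 warning threshold,
# produces enormous normalized values that are numerically unstable
# for any downstream network
x_norm = normalize(x, 1.0, 1e-12)
```

Here `restored` equals `x`, while `x_norm` is on the order of `2e12`, far outside the range a network is trained to handle.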

Fixed

- Fix issue when predicting and data is on different devices (402)
- Fix non-iterable output (404)
- Fix problem with moving data to CPU for multiple targets (434)

Contributors

- jdb78
- domplexity

0.8.4

Added

- Adding a filter functionality to the timeseries dataset (329)
- Add simple models such as LSTM, GRU and an MLP on the decoder (380)
- Allow usage of any torch optimizer such as SGD (380)

Fixed

- Moving predictions to CPU to avoid running out of memory (329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (328)
- Fix tqdm autonotebook to work outside of Jupyter (338)
- Fix issue with yaml serialization for TensorboardLogger (379)

Contributors

- jdb78
- JakeForsey
- vakker

0.8.3

Added

- Make tuning trainer kwargs overwritable (300)
- Allow adding categories to NaNEncoder (303)
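The encoder change (303) lets categories be registered after fitting, so values unseen during fit need not fail at transform time. A minimal label-encoder sketch of that idea; `SimpleLabelEncoder` is a hypothetical class, not the library's encoder:

```python
class SimpleLabelEncoder:
    """Map category values to integer codes; new categories can be added later."""

    def __init__(self):
        self.mapping = {}

    def fit(self, values):
        for v in values:
            self.mapping.setdefault(v, len(self.mapping))
        return self

    def add_categories(self, values):
        # register categories that were not seen during fit,
        # instead of raising on them at transform time
        for v in values:
            self.mapping.setdefault(v, len(self.mapping))
        return self

    def transform(self, values):
        return [self.mapping[v] for v in values]


enc = SimpleLabelEncoder().fit(["a", "b"])
enc.add_categories(["c"])
codes = enc.transform(["a", "c"])
```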

Fixed

- Underlying data is copied if modified. Original data is not modified in place (263)
- Allow plotting of interpretation on passed figure for NBEATS (280)
- Fix memory leak for plotting and logging interpretation (311)
- Correct shape of `predict()` method output for multi-targets (268)
- Remove cloudpickle to allow GPU trained models to be loaded on CPU devices from checkpoints (314)

Contributors

- jdb78
- kigawas
- snumumrik

0.8.2

- Added missing output transformation which was switched off by default (260)
