Pixyz

Latest version: v0.3.3

0.3.3

New features

- Added `set_cache_maxsize` and `cache_maxsize`, a setter and getter for the `maxsize` of the `lru_cache` used internally. Increasing `maxsize` lets more results of `get_params` (roughly, the forward pass of the `Distribution` class) be cached, which speeds up most VAE implementations (see the sketch after this list).
- Added the mixture of experts (MoE).
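A minimal sketch of the new cache controls. The import path `pixyz.utils` and the call forms are assumptions based on the description above, not a confirmed API:

```python
# Sketch only: assumes both helpers live in pixyz.utils and that the
# getter takes no arguments.
from pixyz.utils import set_cache_maxsize, cache_maxsize

print(cache_maxsize())   # inspect the current lru_cache capacity
set_cache_maxsize(4096)  # cache more get_params results to speed up training
```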

Bug fixes

- Fixed `get_params` for `replace_var`.
- Fixed the `maxsize` of `lru_cache` for `get_params`.
- Fixed a major bug in PoE (product of experts).

0.3.2

New features

- Added a description of the tutorials to the README. (#167)
- Added `save` and `load` methods to the `Model` class (sketch below). (#161)
- Added an option to replace the graphical model with a deterministic one. (#169)
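A sketch of how the new save/load methods might be used. The signatures (a single file-path argument) are an assumption; the rest is Pixyz's standard `Model` construction:

```python
import torch
from torch import optim
from pixyz.distributions import Normal
from pixyz.losses import KullbackLeibler
from pixyz.models import Model

# Two fixed Gaussians over z, just enough to build a well-formed loss.
p = Normal(var=["z"], loc=torch.zeros(1, 2), scale=torch.ones(1, 2), name="p")
q = Normal(var=["z"], loc=torch.ones(1, 2), scale=torch.ones(1, 2), name="q")

model = Model(KullbackLeibler(q, p).mean(), distributions=[p, q],
              optimizer=optim.Adam, optimizer_params={"lr": 1e-3})

model.save("checkpoint.pt")  # assumed: persists model state to a path (#161)
model.load("checkpoint.pt")  # assumed: restores it later
```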

Bug fixes

- Fixed the Bernoulli distribution so that likelihood calculation for continuous values such as luminance also works in newer versions of PyTorch.
- Fixed a bug where the `feature_dims` option, which determines which tensor dimensions are summed over as a joint log-likelihood, had no effect. (#163)
- Resolved an issue where options set in `DistGraph.set_option` were overwritten by options in `Distribution.sample` and therefore not reflected. (#162)
- Fixed build failures on Read the Docs. (#170)

0.3.1

New features
- Added Docker Hub installation instructions to the README. (#157)
- Allowed scalar-type parameters in `DistributionBase`. (#159)
Bug fixes
- Reduced dependencies (tensorboardX, tqdm, torchvision). (#156)
- Fixed a build error in Travis CI and fixed the Travis badge link in the README. (#156)
- Fixed an error in pytest (the tutorial directory is now ignored). (#156)
- Fixed typos in the tutorials. (#160)

0.3.0

New features
- A new field, `distribution.graph`, represents the graphical model of the distribution. (#119)
- Supported drawing the graphical model with `networkx.draw_networkx(distribution.graph.visible_graph())` (example below). (#122)
- Added tutorial notebooks. (#148)
- Enabled memoization of Pixyz's computation graph for speedup. (#149)
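A short example of drawing the new graph field. The toy conditional distribution `p(x|z)` is just for illustration, and the string form `loc="z"` (wiring a parameter to a conditioning variable) is assumed to be available here:

```python
import torch
import networkx as nx
import matplotlib.pyplot as plt
from pixyz.distributions import Normal

# A toy conditional Gaussian p(x|z) whose loc is wired to the variable z.
p = Normal(var=["x"], cond_var=["z"], loc="z", scale=torch.ones(1, 2), name="p")

# New in 0.3.0: p.graph holds the graphical model (#119), and its visible
# graph can be drawn directly with networkx (#122).
nx.draw_networkx(p.graph.visible_graph())
plt.show()
```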

Changed API
- Renamed `DataDistribution` to `EmpiricalDistribution`. (#146)
- `timestep_var`, which gives the time index when evaluating a step loss or calling `SliceStep`, is no longer specified by default (`t` is still the default in the display). (#142)
- Corrected an error in the time index shown in the display. (#142)
- Reverted the `TransformedDistribution` arguments to their previous (v0.1.4) format (inference in `TransformedDistribution` internally uses the result of the `sample`/`forward` call made immediately before). (#139)
- Removed the `input_var` argument from the Loss API. (#150)
- Added the `ConstantVar` class, which sets the value of a variable before `loss.eval` is called. (#150)
- Unified the argument order of `var` and `cond_var` in the Distribution API (`var` comes first; example below). (#147)
- Changed the output directory of the example notebooks to make them easier to browse (and automatically installed sklearn with `!pip`). (#141)
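A small illustration of the unified order. Keyword arguments are used here, with `var` listed first to match the new signature order; the string form `loc="x"` is assumed for wiring the parameter to the conditioning variable:

```python
import torch
from pixyz.distributions import Normal

# var comes before cond_var throughout the Distribution API as of 0.3.0 (#147).
q = Normal(var=["z"], cond_var=["x"], loc="x", scale=torch.ones(1, 2), name="q")
```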

Bug fixes
- Eliminated errors in PyTorch 1.6 caused by non-contiguous tensors in flows. (#140)
- Fixed a documentation-related bug. (#138)
- Fixed a bug so that PoE no longer raises an error when only one distribution is specified. (#144)
- Fixed a bug when the same random variable is specified in multiple `__init__` arguments of `DistributionBase`. (#143)
- Replaced direct `forward` calls with `__call__` to take advantage of `torch.nn.Module` hooks. (#136)

0.2.1

Updates
- Enabled the use of time-specific step losses without extending the slice function. (#133)
- Changed the output directory of the example notebooks to make them easier to browse (and automatically installed sklearn with `!pip`). (#127, #130)
- Reverted the `TransformedDistribution` arguments to their previous format (memoization is used to get the log-likelihood). (#126)

Bug fixes
- Eliminated errors in the latest version of PyTorch caused by non-contiguous tensors in flows. (#134)
- Fixed a bug related to `timestep_var` in `IterativeLoss`. (#133)
- Fixed a bug when the same random variable is specified in multiple `__init__` arguments of `DistributionBase`. (#132)
- Fixed a bug so that PoE no longer raises an error when only one distribution is specified. (#124)

0.2.0

Legend: 🛠︎ = interface change, :new: = new feature, :bug: = bug fix

Distribution API
- 🛠︎ Changed the argument order of `__init__` in the exponential distribution families and made the distribution parameters explicit. (#90)
- 🛠︎ Removed the sampling option from `DistributionBase.set_dist` (relaxed distribution families still have a sampling option). (#108)
- 🛠︎ `TransformedDistribution` is now a probability distribution over two variables, the input and the output of the flow, and the `return_all` option of the previous `sample` method was removed. (#115)
- 🛠︎ Renamed the `return_all` option of `InverseTransformedDistribution.sample` to `return_hidden`. (#115)
- 🛠︎ Added a `return_all` option to `TransformedDistribution.sample` (and `InverseTransformedDistribution.sample`) that also returns random variables not involved in the transformation. (#115)
- 🛠︎ Renamed the `TransformedDistribution.__init__` argument `var` to `flow_output_var`.
- :new: Added the `Distribution.has_reparam` property (example below). (#93)
- :new: Added a `return_all` option to `MixtureModel.sample`. (#115)
- :bug: Fixed a bug where parameters of a basic `Distribution` were unintentionally overwritten by `torch.load`. (#113)
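A minimal check of the new property, using a toy distribution:

```python
import torch
from pixyz.distributions import Normal

p = Normal(var=["z"], loc=torch.zeros(1, 2), scale=torch.ones(1, 2))
print(p.has_reparam)  # True: Normal supports reparameterized sampling (#93)
```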

Loss API
- 🛠︎ Removed `StochasticReconstructionLoss` (this loss now needs to be configured explicitly). (#103)
- 🛠︎ Changed the base class of `Loss` to `torch.nn.Module`. (#100)
- 🛠︎ Renamed `Loss.train` to `Loss.loss_train` and `Loss.test` to `Loss.loss_test`. (#100)
- 🛠︎ Renamed `Loss._get_eval` to `Loss.forward`. (#95)
- 🛠︎ Added an `Entropy` method that switches entropy estimation between `AnalyticalEntropy` and `MonteCarlo` via an option (deprecated the `Entropy` class; sketch after this list). (#89)
- 🛠︎ Added a `CrossEntropy` method that can be switched in the same way (deprecated the `CrossEntropy` class). (#89)
- 🛠︎ Added a `KullbackLeibler` method that can be switched in the same way (deprecated the `KullbackLeibler` class). (#89)
- 🛠︎ Deprecated the `ELBO` class and replaced it with an `ELBO` method that returns a `Loss` instance. (#89)
- :new: Added support for the `Loss.detach` method. (#93)
- :new: Added support for `MinLoss` and `MaxLoss`. (#95)
- :new: Added support for the alternative loss `REINFORCE` to derive a policy gradient. (#93)
- :new: Added `Loss` support for `DataParallel`. (#100)
- :new: Separated some features of the `Loss` class into the `Divergence` class. (#95)
- :new: Added a `return_all` option to `Loss.eval` (when `return_dict=True`, you can also choose whether unrelated random variables are returned). (#115)
- :bug: Fixed a bug in `IterativeLoss` where the past value was conditioned on at each step as if it were the future value. (#115)
- :bug: Placed the parameter tensor of `ValueLoss` on the `nn.Module` device. (#100)
- :bug: Fixed incorrect argument checking in `WassersteinDistance` and `MMDLoss`. (#103)
- :bug: Fixed a bug in variable checking during `Loss` initialization. (#107)
- :bug: Fixed a bug in `IterativeLoss`. (#107)
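A sketch of the method-style entropy loss described above. The `analytical` keyword is an assumed name for the estimator switch, not a confirmed signature:

```python
import torch
from pixyz.distributions import Normal
from pixyz.losses import Entropy

q = Normal(var=["z"], loc=torch.zeros(1, 2), scale=torch.ones(1, 2), name="q")

# The deprecated Entropy class is replaced by a method returning a Loss (#89).
# The `analytical` flag name is an assumption; it would switch between
# AnalyticalEntropy and a Monte Carlo estimate.
h_exact = Entropy(q, analytical=True)   # closed-form entropy
h_mc = Entropy(q, analytical=False)     # Monte Carlo estimate

print(h_exact.eval())
print(h_mc.eval())
```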

Other
- :new: Added the `utils.lru_cache_for_sample_dict` decorator, which enables memoization for a function that takes a dictionary of random variables and their realized values (sketch below). (#109)
- :new: Renamed `examples/vae_model` to `vae_with_vae_class`.
- :new: Added some exception messages. (#103)
- :bug: Fixed the Jacobian calculation in `flow/Preprocess`. (#107)
- :bug: Fixed a bug where the README formulas were not shown in some browsers.
- :bug: Alternate text is now displayed when a README formula cannot be rendered. (#117)
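A sketch of the decorator in use. `summarize` is a hypothetical function, and the `maxsize` keyword is an assumption about the decorator's signature:

```python
from pixyz.utils import lru_cache_for_sample_dict

# Hypothetical example: plain functools.lru_cache cannot hash a dict, so this
# decorator (#109) makes a dict-of-tensors argument cacheable.
@lru_cache_for_sample_dict(maxsize=64)  # maxsize keyword is an assumption
def summarize(sample_dict):
    # e.g. {"x": tensor, "z": tensor} -> per-variable means
    return {name: value.mean() for name, value in sample_dict.items()}
```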
