DDSP

1.0.0

Major (breaking) changes:
* New base classes
  * `ProcessorGroup` and `LossGroup` inherit from `DAGLayer` (dags.py)
  * All ddsp.training layers (decoders, encoders, preprocessors) inherit from `DictLayer` (nn.py)
* Renamed classes to more precise terms (see the sketch after this list)
  * `Additive` -> `Harmonic`
  * `DefaultPreprocessor` -> `F0LoudnessPreprocessor`
  * `TranscribingAutoencoder` -> `InverseSynthesis`
* New experimental `MidiAutoencoder` model (WIP)
* `Evaluator` classes in `eval_util` (now configurable from gin instead of a long series of if statements)
* Minor bug fixes
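
For updating old code, a minimal sketch of the `Additive` -> `Harmonic` rename, assuming the harmonic synth's usual controls (per-frame amplitudes, harmonic distribution, and f0); the shapes and constructor arguments here are illustrative, not exhaustive:

```python
import numpy as np
import ddsp

n_frames, n_harmonics = 250, 60
amps = np.ones([1, n_frames, 1], dtype=np.float32)                 # per-frame amplitude
harm_dist = np.ones([1, n_frames, n_harmonics], dtype=np.float32)  # relative harmonic amplitudes
f0_hz = 440.0 * np.ones([1, n_frames, 1], dtype=np.float32)        # fundamental frequency

# Pre-1.0 code used ddsp.synths.Additive; the class is now ddsp.synths.Harmonic.
harmonic = ddsp.synths.Harmonic(n_samples=64000, sample_rate=16000)
audio = harmonic(amps, harm_dist, f0_hz)  # -> [batch, n_samples] audio tensor
```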

0.14.0

* Cloud training scripts
* Model API refactor: no more `model.get_controls()`; `model()` now returns a dictionary of output tensors instead of audio, and audio can be retrieved with `model.get_audio_from_outputs(outputs)` (see the sketch after this list)
* Separate files for each model
* Minor bug fixes
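
A minimal sketch of the new calling convention; `model` and `batch` are placeholders here, standing in for an already-built ddsp.training model and a dict of input features:

```python
import ddsp.training

# `model` (e.g. an Autoencoder built from a gin config) and `batch` (a dict of
# features such as 'audio', 'f0_hz', 'loudness_db') are assumed to exist.
outputs = model(batch)                         # dict of output tensors, not raw audio
audio = model.get_audio_from_outputs(outputs)  # pull the synthesized audio out of the dict
```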

0.13.0


0.12.0

Release for reproducing the results from the 2020 ICML SAS workshop paper (https://openreview.net/forum?id=RlVTYWhsky7).

WIP code from the paper added with EXPERIMENTAL disclaimers. Gin configs and details provided in `ddsp/training/gin/papers/icml2020`.

0.10.0

* Custom cumsum operation to avoid phase accumulation errors when generating long sequences.
* Script to automatically update old gin configs.

0.8.0

Add a custom cumsum function that doesn't accumulate phase errors the way `tf.cumsum` does.
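
A simplified numpy sketch of the idea (not the library's implementation): accumulate phase in chunks and wrap modulo 2π, so floating-point error stays bounded instead of growing over long sequences.

```python
import numpy as np

def chunked_angular_cumsum(omega, chunk_size=1000):
  """Cumulative sum of phase increments, wrapped mod 2*pi.

  Wrapping after every chunk keeps the running value small, so rounding
  error does not grow with sequence length the way a plain cumsum does.
  `omega` is a 1-D array of per-sample angular frequencies (radians/sample).
  """
  phase = np.zeros_like(omega)
  offset = 0.0
  for start in range(0, len(omega), chunk_size):
    chunk = omega[start:start + chunk_size]
    cum = np.cumsum(chunk) + offset
    cum = np.mod(cum, 2.0 * np.pi)        # wrap to keep magnitudes small
    phase[start:start + chunk_size] = cum
    offset = cum[-1]                      # carry the wrapped phase into the next chunk
  return phase
```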

0.7.0

* Updated pitch detection metrics (RPA, RCA)
* Sinusoidal Synthesizer
* Warm starting models (model_dir -> save_dir, restore_dir)

0.5.1

Small fixes to bugs introduced by the refactor :).

0.5.0

Some bug fixes and a refactor of `train_util` and `eval_util`.

0.4.0

* New data normalization in the demo colab notebooks.
* Tiny model config.
* Most (but not all) of the variable sample rate PRs.
* Tests and bug fixes.

0.2.0

Simplify and refactor `RnnFcDecoder`.

* Requires old models to add a single line to their operative gin configs, or pass it as a `--gin_param`: `RnnFcDecoder.input_keys = ('f0_scaled', 'ld_scaled')` (see the sketch below)
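
The same binding can also be applied programmatically; a minimal sketch assuming the standard `gin-config` package and that `ddsp.training` has been imported so `RnnFcDecoder` is registered with gin:

```python
import gin
import ddsp.training  # importing registers RnnFcDecoder (and friends) with gin

# Equivalent to appending the line to an operative config or passing --gin_param.
gin.parse_config("RnnFcDecoder.input_keys = ('f0_scaled', 'ld_scaled')")
```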

0.1.0

* Models now use `self._loss_dict` to keep track of losses instead of the built-in Keras `self.losses`, so each loss name can be tracked without a synced parallel list (see the sketch below).
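
An illustrative sketch of the pattern (not the actual ddsp implementation): losses live in a dict keyed by name, so they can be logged individually and summed without a parallel list of names.

```python
import tensorflow as tf

class SketchModel(tf.keras.Model):
  """Toy model tracking losses in a dict keyed by name, rather than in the
  unnamed list Keras keeps in the built-in `self.losses`."""

  def __init__(self):
    super().__init__()
    self.scale = tf.Variable(1.0)
    self._loss_dict = {}

  def call(self, inputs):
    outputs = self.scale * inputs
    # Each loss is stored under its own name, so it can be logged and summed
    # without maintaining a separate list of loss names.
    self._loss_dict['reconstruction'] = tf.reduce_mean(tf.square(outputs - inputs))
    self._loss_dict['amplitude'] = tf.reduce_mean(tf.abs(outputs))
    return outputs

  @property
  def total_loss(self):
    return tf.add_n(list(self._loss_dict.values()))
```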

0.0.10

* Allow memory growth flag for GPUs with less memory (see the sketch after this list)
* Use latest CREPE
* Remove custom TPU cumsum function
* Bug fixes to colab
* Compare f0 predictions with f0 ground truth
* Creating datasets with different sample rates
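
A sketch of what the memory-growth option amounts to, using the standard TensorFlow 2 API (the exact flag name in the training scripts may differ): allocate GPU memory on demand instead of reserving it all up front.

```python
import tensorflow as tf

# Enable on-demand GPU memory allocation; must run before any GPU op executes.
for gpu in tf.config.list_physical_devices('GPU'):
  tf.config.experimental.set_memory_growth(gpu, True)
```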

0.0.7

Update code to use TensorFlow 2 and Python 3.

0.0.6

Code used in the initial ICLR 2020 paper (https://openreview.net/forum?id=B1x1ma4tDr). `ddsp/` works for tf1 and tf2, while `ddsp/training/` is written with the tf1 Estimator API.