DDSP Changelog




Major (breaking) changes
  * New base classes
    * `ProcessorGroup` and `LossGroup` inherit from `DAGLayer`
    * All layers (decoders, encoders, preprocessors) inherit from `DictLayer`
  * Renamed classes to more precise terms
    * `Additive` -> `Harmonic`
    * `DefaultPreprocessor` -> `F0LoudnessPreprocessor`
    * `TranscribingAutoencoder` -> `InverseSynthesis`
  * New experimental `MidiAutoencoder` model (WIP)
  * `Evaluator` classes in eval_util (now configurable from gin instead of a long chain of if statements)
  * Minor bug fixes
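As a rough illustration of the dict-in/dict-out interface that `DictLayer` provides, here is a toy sketch. The class, keys, and logic below are invented for the example and are not the DDSP implementation:

```python
# Hypothetical sketch of a dict-in/dict-out layer, in the spirit of
# `DictLayer`. Names and computation are invented for illustration.
class TinyDictLayer:
    # Declared keys let layers be wired together by name, not position.
    input_keys = ("f0_scaled", "ld_scaled")
    output_keys = ("amps",)

    def __call__(self, inputs):
        # Select the named inputs from the dict, compute, and return
        # a dict of named outputs.
        f0 = inputs["f0_scaled"]
        ld = inputs["ld_scaled"]
        return {"amps": [f * l for f, l in zip(f0, ld)]}
```

Because every layer reads and writes named entries of a shared dictionary, a DAG of layers can be described purely by key names, which is what makes the `DAGLayer`-style composition above possible.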


* Cloud training scripts
  * Model API refactor: no more `model.get_controls()`; `model()` now returns a dictionary of output tensors instead of audio. Audio can be retrieved with `model.get_audio_from_outputs(outputs)`.
  * Separate files for each model
  * Minor bug fixes
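The new call pattern can be sketched with a toy stand-in model (the class below is invented for illustration; only the dict-returning call and `get_audio_from_outputs` shape come from the changelog entry):

```python
# Toy stand-in showing the refactored API shape: calling the model
# returns a dict of named outputs, and audio is extracted explicitly.
# This class is hypothetical, not part of DDSP.
class TinyModel:
    def __call__(self, batch):
        audio = [x * 0.5 for x in batch["audio"]]  # pretend synthesis
        return {"audio_synth": audio, "f0_hz": batch.get("f0_hz")}

    def get_audio_from_outputs(self, outputs):
        return outputs["audio_synth"]

model = TinyModel()
outputs = model({"audio": [1.0, 2.0]})   # dict of output tensors
audio = model.get_audio_from_outputs(outputs)
```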




Release for reproducing the results from the 2020 ICML SAS workshop paper.
  WIP code from the paper added with EXPERIMENTAL disclaimers.
  Gin configs and details provided in `ddsp/training/gin/papers/icml2020`
  * Custom cumsum operation to avoid phase accumulation errors for generating long sequences.
  * Script to automatically update old gin configs.


Add a custom cumsum function that doesn't accumulate phase errors the way `tf.cumsum` does.
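The idea can be sketched in plain Python: a naive cumulative sum of phase increments grows without bound over long sequences, so floating-point precision degrades; wrapping the running total modulo 2π at every step keeps all intermediate values small. This is only a conceptual sketch, not the DDSP implementation:

```python
import math

def angular_cumsum(phase_increments):
    """Cumulative phase that wraps modulo 2*pi at every step.

    Wrapping keeps every intermediate value in [0, 2*pi), so precision
    does not degrade as the sequence grows. Hypothetical sketch only.
    """
    out = []
    total = 0.0
    for inc in phase_increments:
        total = (total + inc) % (2.0 * math.pi)
        out.append(total)
    return out
```

Since phase is only ever used through periodic functions like `sin`, the wrapped values are interchangeable with the unbounded cumulative sum.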


* Updated pitch detection metrics (RPA, RCA)
  * Sinusoidal Synthesizer
  * Warm starting models (model_dir -> save_dir, restore_dir)
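For reference, raw pitch accuracy (RPA) is conventionally the fraction of voiced frames whose predicted pitch lies within 50 cents of the ground truth (raw chroma accuracy, RCA, additionally forgives octave errors). A minimal sketch of RPA under that standard definition, not DDSP's implementation:

```python
import math

def raw_pitch_accuracy(f0_true, f0_pred, cents_threshold=50.0):
    """Fraction of voiced frames with pitch error under the threshold.

    Follows the conventional RPA definition; frames with f0 <= 0 are
    treated as unvoiced and skipped. Illustrative sketch only.
    """
    correct = 0
    voiced = 0
    for t, p in zip(f0_true, f0_pred):
        if t <= 0:  # skip unvoiced frames
            continue
        voiced += 1
        cents = 1200.0 * math.log2(p / t)  # pitch error in cents
        if abs(cents) <= cents_threshold:
            correct += 1
    return correct / voiced if voiced else 0.0
```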


Small fixes for bugs introduced by the refactor :).


Some bug fixes and a refactor of `train_util` and `eval_util`.


* New data normalization in the demo colab notebooks.
  * Tiny model config.
  * Most (but not all) of the variable sample rate PRs.
  * Tests and bug fixes.


Simplify and refactor `RnnFcDecoder`.
  * Requires old models to add a single line to their operative gin configs (or pass it via `--gin_param`): `RnnFcDecoder.input_keys = ('f0_scaled', 'ld_scaled')`
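In the operative `.gin` file, the required addition looks like this (the same binding can also be supplied on the command line as a `--gin_param` flag):

```
# Needed only for models trained before this refactor:
RnnFcDecoder.input_keys = ('f0_scaled', 'ld_scaled')
```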


* Models now use `self._loss_dict` to keep track of losses instead of the built-in Keras `self.losses`, so that each loss name can be tracked without maintaining a synced parallel list.
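The motivation for the dict is easy to show in isolation: a bare list of loss values needs a second, parallel list of names kept in sync, whereas a dict carries both together. The class and method names below are invented for the sketch and are not DDSP's actual API:

```python
# Toy sketch: keep each loss under its own name in a dict, rather than
# a list of values plus a separate parallel list of names. Names here
# are hypothetical, not DDSP's API.
class LossDictModel:
    def __init__(self):
        self._loss_dict = {}

    def add_named_loss(self, name, value):
        # Each loss is stored under its name, so logging and debugging
        # can report per-loss values without extra bookkeeping.
        self._loss_dict[name] = float(value)

    @property
    def total_loss(self):
        return sum(self._loss_dict.values())
```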


* Allow memory growth flag for GPUs with less memory
  * Use latest CREPE
  * Remove custom TPU cumsum function
  * Bug fixes to colab
  * Compare f0 predictions with f0 ground truth
  * Support creating datasets with different sample rates


Update code to use TensorFlow 2 and Python 3.


Code used in the initial ICLR 2020 paper. `ddsp/` works for tf1 and tf2, while `ddsp/training/` is written with the tf1 Estimator API.