Terminology Change
---
* The term `Loss` has been changed to `Score`. Because visualizations do NOT need to calculate any loss between labels and model outputs, and the calculated values are used simply as scores, we consider the latter term more appropriate than the former.
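Conceptually, a score function is just a callable that reduces model outputs to per-sample scalar scores. A minimal, hypothetical sketch of the idea (the `categorical_score` name below is illustrative, not the library's actual API):

```python
def categorical_score(class_index):
    """Return a score function that picks out one class per sample.

    The returned callable maps model outputs (a batch of per-class
    probabilities or logits) to one scalar score per sample.
    """
    def score(model_output):
        # model_output: a batch as a list of per-class value lists.
        return [sample[class_index] for sample in model_output]
    return score
```

For example, `categorical_score(2)` applied to a single softmax output `[[0.1, 0.2, 0.7]]` yields `[0.7]`: no labels are involved, so calling the value a "loss" would be misleading.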
New Features and Improvements
---
* Support Python 3.9
* Support TensorFlow 2.4 and 2.5
* Support mixed precision
* Fix issues #43, #45, and #47
    * TensorFlow 2.4.0+ only
* Add `unconnected_gradients` option to `__call__()` of ActivationMaximization, Saliency, GradCAM, and GradCAM++.
* Add `standardize_cam` option to `__call__()` of GradCAM, GradCAM++ and ScoreCAM.
* Add `normalize_saliency` option to `__call__()` of Saliency.
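The `unconnected_gradients` option corresponds to the argument of the same name on `tf.GradientTape.gradient`, which controls what is returned when a gradient target does not depend on a source. A minimal sketch of that underlying TensorFlow behavior (not of the tf-keras-vis API itself):

```python
import tensorflow as tf

x = tf.Variable(3.0)
unused = tf.Variable(5.0)

with tf.GradientTape(persistent=True) as tape:
    y = x * x  # y does not depend on `unused`

# Default: the gradient w.r.t. an unconnected variable is None.
g_default = tape.gradient(y, unused)

# With UnconnectedGradients.ZERO, a zero tensor is returned instead.
g_zero = tape.gradient(y, unused,
                       unconnected_gradients=tf.UnconnectedGradients.ZERO)
del tape
```

Returning zeros instead of `None` lets downstream gradient arithmetic proceed without special-casing unconnected inputs.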
Breaking Changes
---
* In all visualization class constructors, the `model` passed as an argument is NOT cloned when `model_modifier` is None.
    * Fixes issue #51
* Deprecates and disables the `normalize_gradient` option in ActivationMaximization and GradCAM.
* Deprecates `tf_keras_vis.utils.callback` module. Use `tf_keras_vis.activation_maximization.callbacks` module instead.
* Renames `Print` to `PrintLogger`; the old name is deprecated.
* Renames `GifGenerator` to `GifGenerator2D`; the old name is deprecated.
* Deprecates `tf_keras_vis.utils.regularizers.TotalVariation`. Use `tf_keras_vis.utils.regularizers.TotalVariation2D` instead.
* Deprecates `tf_keras_vis.utils.regularizers.L2Norm`. Use `tf_keras_vis.utils.regularizers.Norm` instead.
* Renames `tf_keras_vis.utils.normalize` to `tf_keras_vis.utils.standardize`; the old name is deprecated.
* You no longer need to use `tf_keras_vis.utils.normalize` to visualize CAM or Saliency. Use the `standardize_cam` and `normalize_saliency` options instead, respectively.
* You no longer need to cast the activations returned by ActivationMaximization before visualizing them.
* See [the example of ActivationMaximization](https://github.com/keisen/tf-keras-vis/blob/master/examples/visualize_conv_filters.ipynb) for details.
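For reference, the standardization applied by options such as `standardize_cam` amounts to min-max scaling into the [0, 1] range. A minimal sketch of that operation on a flat list of values; the library's actual implementation may differ (e.g. operating per-sample on N-D tensors):

```python
def standardize(values, epsilon=1e-8):
    """Min-max scale a flat list of values into [0, 1].

    `epsilon` guards against division by zero when all values are equal.
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo + epsilon) for v in values]
```

For example, `standardize([0.0, 5.0, 10.0])` maps the smallest value to 0 and the largest to (approximately) 1, which makes heatmaps directly comparable when rendered.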
Bug Fixes and Other Changes
---
* Fixes a problem in the Rotate input modifier that prevented it from working correctly when input tensors are not 2D images.
* Add a test utility and test cases.
* Update Dockerfiles and example notebooks.
Known Issues
---
* With a mixed-precision model, regularization values calculated by ActivationMaximization may be NaN.
* With a mixed-precision model that has a layer whose dtype is explicitly set to float32, ActivationMaximization may raise an error.