Fixes critical bugs
---
ActivationMaximization
We've fixed **a problem of unstable gradient calculation** in ActivationMaximization. Because the related implementation also had a bad effect on mixed-precision models, the following mixed-precision problems with ActivationMaximization were fixed as well.
* Fixed issues related to mixed-precision
    * The results of full-precision and mixed-precision models differed.
    * When the model has a layer whose dtype is explicitly set to float32, ActivationMaximization might raise an error.
    * Regularization values calculated by ActivationMaximization could easily become `NaN` or `inf`.
Because the gradient calculation now produces different results from past versions, we newly provide the `tf_keras_vis.activation_maximization.legacy` module to keep compatibility. If you have code that you adjusted yourself against past versions, you can use the legacy implementation as follows:
```python
# from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.legacy import ActivationMaximization
```
Please note that the `tf_keras_vis.activation_maximization.legacy` module above still has the problem of unstable gradient calculation, so unless you have code adjusted against past versions, we strongly recommend using the `tf_keras_vis.activation_maximization` module.
Regularization for ActivationMaximization
We also found and fixed the following bugs in the Regularizers.
* Fixed issues related to Regularizers
    * In `TotalVariation2D`, the more samples `seed_input` contained, the smaller the regularization value became.
    * In `Norm`, the larger the spatial size of `seed_input`, the smaller the regularization value became.
In addition to the above, we've changed the signature of `Regularizer.__call__()`. The method now accepts only one seed input (the legacy one accepts all seed inputs at once). With this change, the `regularizers` argument of `ActivationMaximization.__call__()` now accepts a dictionary that maps each model input to its Regularizer instances.
To keep compatibility, instead of updating the `tf_keras_vis.utils.regularizers` module, we've newly provided the `tf_keras_vis.activation_maximization.regularizers` module that includes the improved regularizers. If you have code that you implemented or adjusted yourself against past versions, you can use the legacy implementation as follows:
```python
# from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D
from tf_keras_vis.utils.regularizers import Norm, TotalVariation2D
```
Please note that the `tf_keras_vis.utils.regularizers` module still has the bugs and will print a lot of warnings, so unless you have code adjusted against past versions, we strongly recommend using the `tf_keras_vis.activation_maximization.regularizers` module.
If you face any problem related to this release, please feel free to ask us on the [Issues page](https://github.com/keisen/tf-keras-vis/issues).
Add features and Improvements
---
* Add `tf_keras_vis.utils.model_modifiers` module.
    * To fix issue #49.
    * This module includes `ModelModifier`, `ReplaceToLinear`, `ExtractIntermediateLayer` and `GuidedBackpropagation`.
    * As a result, the `model_modifier` argument of `tf_keras_vis.ModelVisualization.__init__()` now also accepts a `tf_keras_vis.utils.model_modifiers.ModelModifier` instance or a list of `Callable` objects and `ModelModifier` instances.
* Add `tf_keras_vis.gradcam_plus_plus` module.
    * This module includes `GradcamPlusPlus`.
* Add `tf_keras_vis.activation_maximization.legacy` module.
    * This module includes an `ActivationMaximization` that still has the problem of unstable gradient calculation.
* Add `tf_keras_vis.activation_maximization.input_modifiers` module.
    * This module includes `Jitter`, `Rotate` and `Scale`.
* Add `tf_keras_vis.activation_maximization.regularizers` module.
    * This module includes `TotalVariation2D` and `Norm` with the bugs above fixed.
* Add `Scale`, a new InputModifier class, to the `tf_keras_vis.activation_maximization.input_modifiers` module.
* Add `Progress`, a new Callback class, to the `tf_keras_vis.activation_maximization.callbacks` module.
* Add `activation_modifiers` argument to `ActivationMaximization.__call__()`.
* ~~Add a GitHub Actions recipe to publish tf-keras-vis to Anaconda.org~~
    * To fix issue #54.
* Improve Scorecam.
    * Fix an incorrect weight calculation. (Reduces noise)
    * Change cubic interpolation to linear. (10x faster)
    * Apply the softmax function to scores. (More stable)
    * Add validation to check for invalid scores.
Breaking Changes
---
* In all visualizations, the `score` argument must now be a list of `tf_keras_vis.utils.scores.Score` instances or `Callable` objects when the model has multiple outputs.
* Change the default parameters of `ActivationMaximization.__call__()`.
    * This is a consequence of fixing the critical bug in `ActivationMaximization` where the gradient-descent calculation was unstable.
* Deprecate the `tf_keras_vis.utils.regularizers` module; use the `tf_keras_vis.activation_maximization.regularizers` module instead.
    * For now, both the current and legacy regularizers can be used in ActivationMaximization, but please note that they can't be mixed.
* Deprecate the `tf_keras_vis.utils.input_modifiers` module; use the `tf_keras_vis.activation_maximization.input_modifiers` module instead.
* Deprecate `tf_keras_vis.activation_maximization.callbacks.PrintLogger`; use `Progress` instead.
* Add `**arguments` argument to `Callback.on_begin()`.
    * `**arguments` holds the values that were passed to `ActivationMaximization.__call__()` as arguments.
* Deprecate `tf_keras_vis.gradcam.GradcamPlusPlus`; use `tf_keras_vis.gradcam_plus_plus.GradcamPlusPlus` instead.
Bugfixes and Other Changes
---
* Fix a bug where Scorecam didn't work correctly with multi-input models.
* Fix some bugs when loading input modifiers.
* Fix a bug where `Callback.on_end()` might NOT be called when an error occurs.
* Improve the error message shown when `max_N` is invalid in `Scorecam`.
* Improve the `input_range` argument of `ActivationMaximization.__call__()` to raise an error when it's invalid.
* Change docstring style to `google`.
* Replace `str.format()` with f-strings.
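The `str.format()`-to-f-string change is purely internal; for the record, the two forms produce identical strings (the message below is made up):

```python
layer, dtype = "conv2d", "float32"

# Before: str.format()
old = "Layer {} uses dtype {}".format(layer, dtype)
# After: f-string (same result, easier to read)
new = f"Layer {layer} uses dtype {dtype}"

assert old == new == "Layer conv2d uses dtype float32"
```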