TensorFlow Model Analysis (tensorflow-model-analysis)

Latest version: v0.46.0

Page 7 of 10

0.21.4

Major Features and Improvements

* Added support for creating metrics specs from tf.keras.losses.
* Added evaluation comparison feature to the Fairness Indicators UI in Colab.
* Added better defaults handling for eval config so that a single model spec
can be used for both candidate and baseline.
* Added support to provide output file format in load_eval_result API.
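
As a rough illustration of the improved defaults, an `tfma.EvalConfig` text proto might now need only a single model spec rather than separate candidate and baseline entries (field names below follow the `eval_config.proto` of this release and should be verified against it):

```proto
# Hypothetical minimal EvalConfig: with the improved defaults handling,
# one model_spec can stand in for both the candidate and the baseline.
model_specs {
  label_key: "label"
}
metrics_specs {
  metrics { class_name: "BinaryCrossentropy" }  # spec created from tf.keras.losses
}
slicing_specs {}  # overall (unsliced) metrics
```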

Bug fixes and other changes

* Fixed issue with keras metrics saved with the model not being calculated
unless a keras metric was added to the config.
* Depends on `pandas>=0.24,<2`.
* Depends on `pyarrow>=0.15,<1`.
* Depends on `tfx-bsl>=0.21.3,<0.23`.
* Depends on `tensorflow>=1.15,!=2.0.*,<3`.
* Depends on `apache-beam[gcp]>=2.17,<2.18`.

Deprecations

0.21.3

Major Features and Improvements

* Added support for model validation using either value threshold or diff
threshold.
* Added a writer to output model validation result (ValidationResult).
* Added support for multi-model evaluation using EvalSavedModels.
* Added support for inserting model_names by default to metrics_specs.
* Added support for selecting custom model format evals in config.
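
The new model validation support is driven by thresholds attached to individual metrics in the config. A hedged sketch (exact field names should be checked against this release's `MetricThreshold` proto definition):

```proto
# Hypothetical metrics spec combining a value threshold and a diff threshold.
metrics_specs {
  metrics {
    class_name: "AUC"
    threshold {
      value_threshold { lower_bound { value: 0.7 } }  # absolute quality gate
      change_threshold {                              # gate relative to the baseline
        direction: HIGHER_IS_BETTER
        absolute { value: -0.01 }
      }
    }
  }
}
```

When a threshold is violated, the failure is recorded in the ValidationResult output produced by the new validation writer.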

Bug fixes and other changes

* Fixed issue with model_name not being set in keras metrics.

Breaking changes

* Populated the TDistributionValue metric when confidence intervals are enabled
  in V2.
* Renamed the writer MetricsAndPlotsWriter to MetricsPlotsAndValidationsWriter.

Deprecations

0.21.2

Major Features and Improvements

Bug fixes and other changes

* Added SciPy dependency for both Python 2 and Python 3.
* Increased table and tooltip font in Fairness Indicators.

Breaking changes

* `tfma.BinarizeOptions.class_ids`, `tfma.BinarizeOptions.k_list`,
`tfma.BinarizeOptions.top_k_list`, and `tfma.Options.disabled_outputs` are
now wrapped in an additional proto message.
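
Configs that previously set these fields as repeated scalars need a small update. Sketching the change for `class_ids` (assuming the wrapper message exposes a repeated `values` field, which should be verified against the release's proto):

```proto
# Before (0.21.1 and earlier):
#   binarize { class_ids: [0, 1, 2] }
# After (0.21.2), the ids are wrapped in an additional message:
binarize {
  class_ids { values: [0, 1, 2] }
}
```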

Deprecations

0.21.1

Major Features and Improvements

* Added a TFLite predict extractor to enable obtaining inferences from TFLite
  models.

Bug fixes and other changes

* Added support for computing deterministic confidence intervals using a seed
  value in the `tfma.run_model_analysis` API for testing or experimental purposes.
* Fixed calculation of `tfma.metrics.CoefficientOfDiscrimination` and
`tfma.metrics.RelativeCoefficientOfDiscrimination`.

Breaking changes

* Renamed the `k_anonymization_count` field to `min_slice_size`.

Deprecations

0.21.0

Major Features and Improvements

* Added `tfma.metrics.MinLabelPosition` and `tfma.metrics.QueryStatistics` for
use with V2 metrics API.
* Added `tfma.metrics.CoefficientOfDiscrimination` and
`tfma.metrics.RelativeCoefficientOfDiscrimination` for use with V2 metrics
API.
* Added support for using `tf.keras.metrics.*` metrics with V2 metrics API.
* Added support for default V2 MetricSpecs and creating specs from
  `tf.keras.metrics.*` and `tfma.metrics.*` metric classes.
* Added new MetricsAndPlotsEvaluator based on V2 infrastructure. Note this
evaluator also supports query-based metrics.
* Added support for micro_average, macro_average, and weighted_macro_average
  metrics.
* Added support for running V2 extractors and evaluators. V2 extractors will
be used whenever the default_eval_saved_model is created using a non-eval
tag (e.g. `tf.saved_model.SERVING`). The V2 evaluator will be used whenever
a `tfma.EvalConfig` is used containing `metrics_specs`.
* Added support for `tfma.metrics.SquaredPearsonCorrelation` for use with V2
metrics API.
* Improved support for TPU autoscaling and handling batch_size related
scaling.
* Added support for `tfma.metrics.Specificity`, `tfma.metrics.FallOut`, and
`tfma.metrics.MissRate` for use with V2 metrics API. Renamed `AUCPlot` to
`ConfusionMatrixPlot`, `MultiClassConfusionMatrixAtThresholds` to
`MultiClassConfusionMatrixPlot` and `MultiLabelConfusionMatrixAtThresholds`
to `MultiLabelConfusionMatrixPlot`.
* Added Jupyter support to Fairness Indicators. Currently does not support WIT
integration.
* Added fairness indicators metrics
`tfma.addons.fairness.metrics.FairnessIndicators`.
* Updated documentation for new metrics infrastructure and newly supported
models (keras, etc).
* Added support for model diff metrics. Users need to turn on "is_baseline" in
the corresponding ModelSpec.
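
Several of these features meet in the EvalConfig. A hedged sketch combining a keras metric, the fairness indicators metric, and a baseline model for diff metrics (the class names and the JSON config format are assumptions to verify against the release):

```proto
model_specs { name: "candidate" }
model_specs { name: "baseline" is_baseline: true }  # enables model diff metrics
metrics_specs {
  metrics { class_name: "BinaryAccuracy" }  # a tf.keras.metrics.* metric via the V2 API
  metrics {
    class_name: "FairnessIndicators"
    config: '{"thresholds": [0.3, 0.5, 0.7]}'
  }
}
```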

Bug fixes and other changes

* Fixed error in `tfma-multi-class-confusion-matrix-at-thresholds` with
default classNames value.
* Fairness Indicators
- Compute ratio metrics with safe division.
- Remove "post_export_metrics" from metric names.
- Move threshold dropdown selector to a metric-by-metric basis, allowing
different metrics to be inspected with different thresholds. Don't show
thresholds for metrics that do not support them.
- Slices are now displayed in alphabetical order.
- Added an option to "Select all" metrics in the UI.
* Added auto slice key extractor based on statistics.
* Depends on `tensorflow-metadata>=0.21,<0.22`.
* Made InputProcessor externally visible.

Breaking changes

* Updated the proto config to remove input/output data specs in favor of
  passing them directly to run_eval.

Deprecations

0.15.4

Major Features and Improvements

Bug fixes and other changes

* Fixed a bug where Fairness Indicators would skip metrics with NaN values.

Breaking changes

Deprecations
