Skater

Latest version: v1.1.2


1.1.2

* Support for Tree Surrogates for explanations (experimental)
* Bug-fixes
* Updated notebooks examples
* Basic documentation updates to reflect the new changes (more work needs to be done there)
* Other bug-fixes and improvements

1.1.1b3

* Simplified Installation to handle dependencies better
* Convenience function `flip_orientation` to support flipping image orientation
* Updated examples on how to use Skater with XGBoost
* Enabled support for image inference using occlusion, with the ability to specify `window_size` and `step_size` for perturbing the feature space (see the sketch after this list)
* Other minor bug fixes and code clean-ups
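
As a rough illustration of the occlusion idea, below is a minimal, framework-agnostic sketch that slides an occluding window over an image and measures the drop in the predicted score. The function names and defaults are illustrative, not Skater's exact API.

# Illustrative occlusion-based attribution; `predict` is any function mapping
# a batch of images to class scores. Names here are placeholders, not Skater's API.
import numpy as np

def occlusion_map(predict, image, target_class, window_size=8, step_size=4, fill_value=0.0):
    """Slide an occluding patch over the image and record the drop in the
    target-class score; larger drops indicate more important regions."""
    h, w = image.shape[:2]
    baseline = predict(image[np.newaxis])[0, target_class]
    heatmap = np.zeros((h, w))
    counts = np.zeros((h, w))
    for top in range(0, h - window_size + 1, step_size):
        for left in range(0, w - window_size + 1, step_size):
            occluded = image.copy()
            occluded[top:top + window_size, left:left + window_size] = fill_value
            score = predict(occluded[np.newaxis])[0, target_class]
            heatmap[top:top + window_size, left:left + window_size] += baseline - score
            counts[top:top + window_size, left:left + window_size] += 1
    return heatmap / np.maximum(counts, 1)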

Credits:
To all contributors who have helped in moving the project forward.

1.1.1b2

New Features:
* Added a new interface, skater.core.local_interpretation.dnni.deep_interpreter.DeepInterpreter, for interpreting TensorFlow and Keras based models (see the usage sketch after this list)
* Enabled support for interpreting DNNs using gradient-based e-LRP and Integrated Gradients through DeepInterpreter.explain
* Added support for visualizing relevance/attribution scores when interpreting image and text inputs
  * skater.core.visualizer.image_relevance_visualizer.visualize
  * skater.core.visualizer.text_relevance_visualizer: build_visual_explainer, show_in_notebook
* User-friendly utility functions to generate simple yet effective conditional adversarial examples for image inputs
  * skater.util.image_ops: load_image, show_image, normalize, add_noise, flip_pixels, image_transformation
  * skater.util.image_ops: in_between, greater_than, greater_than_or_equal
* More interactive notebook use-cases for building and interpreting DNNs, evaluating model stability, and identifying blind spots
* Updates to the documentation: https://datascienceinc.github.io/Skater/overview.html
* New section summarizing notebook examples: https://datascienceinc.github.io/Skater/gallery.html
* Other bug fixes
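
A minimal sketch of how the DeepInterpreter interface above might be used with a Keras (TensorFlow backend) model. The context-manager pattern and the exact arguments to explain are assumptions here; refer to the updated notebooks for the supported signature.

import keras.backend as K
from skater.core.local_interpretation.dnni.deep_interpreter import DeepInterpreter

# Hedged sketch: the argument names/order for explain() are assumptions.
with DeepInterpreter(session=K.get_session()) as di:
    # Build or load the model inside the context so the interpreter can patch
    # the gradient ops needed for e-LRP / Integrated Gradients.
    model = build_or_load_keras_model()          # hypothetical helper
    target = model.layers[-1].output * y_onehot  # relevance w.r.t. the labels of interest
    relevance_scores = di.explain('elrp', target, model.inputs[0], X_batch)
    # a different method key selects Integrated Gradients; see the notebooks for the exact name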

Credits:
* Special thanks to Marco Ancona (marcoancona) for his guidance in enabling this feature within Skater.
* Thanks to all other contributors for helping move the library forward every day.

1.1.0

* This is a follow-up release on the experimental support for building rule-based models, enabling interpretability at both the global and local scope.
* Improved docs, with a section of Jupyter Notebooks highlighting the different supported algorithms.

More improvements are planned for subsequent releases. Stay tuned.

1.1.0b1

1. Until now, Skater has been an interpretation engine for post-hoc model evaluation and interpretation. With this release, Skater starts its journey to support interpretable models.
Rule List algorithms are highly popular in the space of interpretable models because the trained models are represented as simple decision lists. In this release, we enable support for Bayesian Rule Lists (BRL).
The probabilistic classifier (estimating P(Y=1|X) for each X) optimizes the posterior of a Bayesian hierarchical model over the pre-mined rules.

Usage Example:

from skater.core.global_interpretation.interpretable_models.brlc import BRLC
import pandas as pd
from sklearn.datasets.mldata import fetch_mldata
from sklearn.model_selection import train_test_split

input_df = fetch_mldata("diabetes")
...
Xtrain, Xtest, ytrain, ytest = train_test_split(input_df, y, test_size=0.20, random_state=0)

sbrl_model = BRLC(min_rule_len=1, max_rule_len=10, iterations=10000, n_chains=20, drop_features=True)
# Train a model; the discretizer is enabled by default, so if you wish to exclude
# features from discretization, list them via the undiscretize_feature_list parameter.
model = sbrl_model.fit(Xtrain, ytrain, bin_labels="default")

2. Other minor bug fixes and documentation updates

Credits:
Special thanks to Professor Cynthia Rudin, Hongyu Yang, and tmadl (Tamas Madl) for helping enable this feature.

1.0.3

This release includes:
* Various bug fixes and performance improvements
* A new feature importance calculation method
* Introduction of model scorers
* Model types can now be determined explicitly


Model Scoring
Now, after you create a Skater model with:


model = InMemoryModel(predict_fn, examples=examples, model_type="classifier")


The model object now exposes a .scorers API, which allows you to score predictions against training labels. Depending on whether your model is a regressor, a classifier that returns labels, or a classifier that returns probabilities, scorers automatically expose the scoring algorithms appropriate to your model. For instance, in the example above, we could do:

model.scorers.f1(labels, model(X))
model.scorers.cross_entropy(labels, model(X))


If it were a regressor, we could do:


model.scorers.mse(labels, model(X))


Calling model.scorers.default(labels, model(X)), or simply model.scorers(labels, model(X)), will execute the default scorer for your model. The defaults are:

* regression: mean absolute error
* classifier (probabilities): cross entropy
* classifier (labels): f1

Let us know if you'd like more scorers, or even better, feel free to make a PR to add more yourself!
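
Putting the pieces together, here is a small end-to-end sketch of the scoring workflow; the dataset and classifier are placeholders, and only the InMemoryModel and scorers calls shown above are assumed.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from skater.model import InMemoryModel

# Placeholder data and classifier purely for illustration
data = load_breast_cancer()
X, labels = data.data, data.target
clf = LogisticRegression(max_iter=5000).fit(X, labels)

# Wrap the probability-returning classifier as a Skater model
model = InMemoryModel(clf.predict_proba, examples=X[:10], model_type="classifier", probability=True)

# Score predictions against the training labels
print(model.scorers.cross_entropy(labels, model(X)))  # explicit scorer
print(model.scorers(labels, model(X)))                # default scorer (cross entropy here)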


Feature Importance Calculation
The default method of computing feature importance perturbs each feature and observes how much those perturbations affect predictions.

With the addition of model scoring, we now also provide a method based on observing changes in model scoring functions: the less accurate your model becomes when a feature is perturbed, the more important that feature is.

To enable scoring based feature importance, you must load training labels into your interpretation object, like:

interpreter = Interpretation(training_data=training_data, training_labels=training_labels)
interpreter.feature_importance.plot_feature_importance(model, method='model-scoring')
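
For context, here is a self-contained sketch of scoring-based feature importance following the snippet above; the dataset, model, and the feature_names keyword are illustrative placeholders, while the Interpretation and plot_feature_importance calls mirror the snippet just shown.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

# Placeholder data and model purely for illustration
data = load_breast_cancer()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
model = InMemoryModel(clf.predict_proba, examples=X[:10], model_type="classifier", probability=True)

# Training labels are required so the 'model-scoring' method can measure score degradation
interpreter = Interpretation(training_data=X, training_labels=y, feature_names=data.feature_names)
interpreter.feature_importance.plot_feature_importance(model, method='model-scoring')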


Explicit Model Types

Originally, Skater tried to infer the type of your model based on the types of predictions it made. Now, when you create a model, you can define the type explicitly with the `model_type` and `probability` keyword arguments:


model = InMemoryModel(predict_fn, model_type='classifier', probability=True)

or

model = InMemoryModel(predict_fn, model_type='regressor')
