AIX360

Latest version: v0.3.0

0.3.0

**New explainability algorithms and time series support**

1. **Interpretable Model Differencing (IMD):** The IMD explainer computes rules that explain the scenarios in which two (black-box) classifiers trained on the same task produce different outcomes. The explainer generates a visualisation (a joint surrogate tree) for navigating the commonalities and differences between the two models in an intuitive manner. The IMD algorithm works on tabular classification data from any domain, with both continuous and categorical features. The explainer is model agnostic and requires only the training data and a prediction interface to access model outputs.
[Paper](https://arxiv.org/abs/2306.06473)
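
A conceptual sketch of the model-differencing idea (not the IMD API): label each training point by whether two classifiers disagree on it, then fit a small surrogate decision tree to obtain IF-THEN style disagreement regions. The models and dataset below are illustrative.

```python
# Conceptual sketch only (not the IMD API): approximate where two models
# disagree by fitting a shallow decision tree on their disagreement labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
model_a = RandomForestClassifier(random_state=0).fit(X, y)
model_b = LogisticRegression(max_iter=1000).fit(X, y)

# Label each training point by whether the two models disagree on it.
disagree = (model_a.predict(X) != model_b.predict(X)).astype(int)

# A shallow surrogate tree yields rule-like regions of disagreement.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, disagree)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```
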
2. **Rule Induction algorithms:** Under the Rule Induction category, the RIPPER explainer (Repeated Incremental Pruning to Produce Error Reduction) computes IF-THEN-ELSE rules from labeled data, similar to decision trees but expressed in disjunctive/conjunctive normal form. Also included is the "technical rule exchange format" (TRXF) for standardised representation of the algorithm outputs. TRXF objects have methods to export rules into PMML format, ready for use in downstream analytic applications that support it.
[Docs](https://aix360.readthedocs.io/en/latest/dise.html#aix360.algorithms.rule_induction.ripper.RipperExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/rule_induction/ripper_demo.ipynb) [Paper](https://doi.org/10.1016/B978-1-55860-377-6.50023-2)
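
A minimal usage sketch, assuming the fit/explain interface shown in the linked docs and notebook; the `target_label` argument name and the toy data are illustrative, not authoritative.

```python
import numpy as np
import pandas as pd
from aix360.algorithms.rule_induction.ripper import RipperExplainer

# Toy labeled tabular data; any DataFrame/Series pair is used the same way.
rng = np.random.default_rng(0)
X = pd.DataFrame({"age": rng.integers(20, 65, size=200),
                  "hours": rng.integers(10, 60, size=200)})
y = pd.Series(np.where((X["age"] > 40) & (X["hours"] > 35), ">50K", "<=50K"),
              name="income")

ripper = RipperExplainer()
ripper.fit(X, y, target_label=">50K")  # assumed signature; see the linked docs/notebook
ruleset = ripper.explain()             # rule set in the TRXF representation
print(ruleset)
```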

3. **Optimal Transport Matching:** In NLP applications, sentences/documents are represented as distributions over the underlying tokens, and optimal transport (OT) is commonly used to compute distances between two sentences in an unsupervised manner. OT takes two distributions, each representing a sentence, and computes a metric that represents their closeness. Additionally, OT returns a transport plan, a weight matrix representing how close each token in the target sentence is to each source token. The OTMatchingExplainer attempts to further explain the transport plan by generating alternative likely weight matrices. The explainer presents these additional plans to the user, so that a more suitable plan can be chosen as the explanation if the original OT plan is deemed unsatisfactory.
[Paper](https://arxiv.org/abs/2110.07275)
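
To make the transport-plan terminology concrete, the sketch below computes an OT plan between two toy sets of token embeddings using the third-party POT package (`pip install pot`). It only illustrates the underlying objects and is not the OTMatchingExplainer API.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
src_emb = rng.normal(size=(5, 50))   # 5 source tokens, toy 50-d embeddings
tgt_emb = rng.normal(size=(7, 50))   # 7 target tokens

a = np.full(5, 1 / 5)                # uniform distribution over source tokens
b = np.full(7, 1 / 7)                # uniform distribution over target tokens
M = ot.dist(src_emb, tgt_emb)        # pairwise cost matrix (squared Euclidean)

plan = ot.emd(a, b, M)               # transport plan: token-to-token weights
distance = float(np.sum(plan * M))   # OT distance between the two "sentences"
print(plan.shape, distance)
```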

4. **Continued Fraction Nets (CoFrNets):** The CoFrNet explainer is a directly interpretable model which is inspired by continued fractions and is particularly suitable for tabular and text data.
[Docs](https://aix360.readthedocs.io/en/latest/die.html#aix360.algorithms.cofrnet.CoFrNet.CoFrNet_Explainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/cofrnet/cofrnet_example.ipynb) [Paper](https://proceedings.neurips.cc/paper_files/paper/2021/hash/b538f279cb2ca36268b23f557a831508-Abstract.html)
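
As a rough illustration of the continued-fraction idea only (the exact CoFrNet parameterization and training procedure are in the linked paper), a single "ladder" nests linear terms inside reciprocal terms:

```python
import numpy as np

def ladder(x, weights, eps=1e-3):
    """Evaluate w_K.x + 1/(w_{K-1}.x + 1/(... + 1/(w_1.x))), guarding small denominators."""
    value = weights[0] @ x
    for w in weights[1:]:
        denom = value if abs(value) > eps else (eps if value >= 0 else -eps)
        value = w @ x + 1.0 / denom
    return value

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 2.0])                    # one tabular input
weights = [rng.normal(size=3) for _ in range(4)]  # depth-4 ladder weights
print(ladder(x, weights))
```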

5. **Nearest Neighbor Contrastive Explainer:** The Nearest Neighbor Contrastive Explainer is a model-agnostic explanation method that provides exemplar-based, feasible or realizable contrastive instances for tabular data. For a given model, exemplar/representative dataset, and query point, it computes the closest point within the representative dataset that has a different prediction from the query point (with respect to the model). The closeness metric is defined using an autoencoder and ensures a robust and faithful neighbourhood even in the case of high-dimensional feature spaces or noisy datasets. This explanation method can also be used in a model-free setting, where the model predictions are replaced by (user-provided) ground truth.
[Docs](https://aix360.readthedocs.io/en/latest/lbbe.html#aix360.algorithms.nncontrastive.nncontrastive.NearestNeighborContrastiveExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/nncontrastive/nncontrastive_demo.ipynb)
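
A conceptual sketch of the contrastive search (not the NearestNeighborContrastiveExplainer API): among exemplars whose model prediction differs from the query's, pick the closest one. AIX360 measures closeness in an autoencoder latent space; plain Euclidean distance is used here only to keep the sketch short.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

query = X[0]
query_pred = model.predict(query.reshape(1, -1))[0]

exemplar_preds = model.predict(X)
candidates = X[exemplar_preds != query_pred]      # exemplars with a different prediction
dists = np.linalg.norm(candidates - query, axis=1)
contrastive = candidates[np.argmin(dists)]        # nearest contrastive instance
print(contrastive[:5])
```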

6. **Grouped CE Explainer:** GroupedCE is a local, model-agnostic explainer that generates grouped Conditional Expectation (CE) plots for a given instance and a set of features, an extension of classical Individual Conditional Expectation (ICE) to higher dimensions. The set of features can be either a subset of the input covariates defined by the user or the top K features ranked by the importances provided by a global explainer. The explainer produces 3D plots containing the model output when pairs of features vary simultaneously. If a single feature is provided, the explainer produces standard 2D ICE plots, where only one feature is perturbed at a time.
[Docs](https://aix360.readthedocs.io/en/latest/lbbe.html#aix360.algorithms.gce.gce.GroupedCEExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/gce/gce_demo.ipynb)
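
A conceptual sketch of the computation behind the 3D plots (not the GroupedCEExplainer API): hold the instance fixed and evaluate the model over a grid for one pair of features.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

instance = X[0].copy()
f1, f2 = 2, 8                                   # the pair of features to vary
grid1 = np.linspace(X[:, f1].min(), X[:, f1].max(), 25)
grid2 = np.linspace(X[:, f2].min(), X[:, f2].max(), 25)

surface = np.zeros((25, 25))                    # model output as both features vary
for i, v1 in enumerate(grid1):
    for j, v2 in enumerate(grid2):
        row = instance.copy()
        row[f1], row[f2] = v1, v2
        surface[i, j] = model.predict(row.reshape(1, -1))[0]
print(surface.shape)
```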

7. **Time Series Explainability Algorithms**
The current version of the toolkit has been expanded to support time series data which occurs in numerous application domains such as asset management and monitoring, supply chain, finance, and IoT. The toolkit includes the following new time series explainability algorithms: TSSaliencyExplainer, TSLimeExplainer, and TSICEExplainer.

- **TimeSeries Saliency (TSSaliency) Explainer:** TSSaliency implements a model-agnostic integrated gradient method for time series prediction models. An integrated gradient map is an axiomatic saliency measure obtained by integrating model sensitivity (gradient) over a path from a base signal to the target signal. In the time series context, the base signal is a constant signal with the average strength of each variate. The sample paths are generated using convex (affine) combinations of the base signal and the target signal. The gradient computation uses zeroth-order Monte Carlo sampling.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tssaliency.tssaliency.TSSaliencyExplainer) [Notebook (univariate & multivariate)](https://github.com/Trusted-AI/AIX360/blob/master/examples/tssaliency/)
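
A schematic of the integrated-gradient computation described above (not the TSSaliencyExplainer API), using a stand-in black-box model and a zeroth-order Monte Carlo gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.normal(size=100)  # target signal
base = np.full_like(x, x.mean())                                  # constant base signal

def f(ts):
    """Stand-in black-box model returning a scalar output."""
    return float(np.tanh(ts[-10:].mean()))

def mc_gradient(ts, n=64, sigma=0.05):
    """Zeroth-order Monte Carlo estimate of the gradient of f at ts."""
    grad = np.zeros_like(ts)
    for _ in range(n):
        u = rng.normal(size=ts.shape)
        grad += (f(ts + sigma * u) - f(ts - sigma * u)) / (2 * sigma) * u
    return grad / n

alphas = np.linspace(0, 1, 20)                   # convex combinations of base and target
avg_grad = np.mean([mc_gradient(base + a * (x - base)) for a in alphas], axis=0)
saliency = (x - base) * avg_grad                 # integrated-gradient saliency map
print(saliency.shape)
```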

- **TSICEExplainer:** TSICE generalises the ICE (Individual Conditional Expectation) algorithm to time series data. The traditional ICE algorithm uses independent feature variations (varying one feature while fixing the others) to analyze the effect of a feature on the model's predictions; this independence assumption does not hold for time series data. TSICE instead uses derived features, computed from a group of observations over a contiguous time range. Rather than exploring features independently, TSICE explores the feature space via structured time series perturbations that do not violate the correlational structure within the data. These perturbations result in multiple instances of the time series on which forecasts are produced. TSICE produces two explanations: (1) an explanation based on the perturbations around the selected time window and the resulting variation in the forecast, and (2) an explanation based on the derived features and the variation in the model response relative to the base response.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tsice.tsice.TSICEExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/tsice/tsice_demo.ipynb)
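
A toy sketch of the TSICE idea (not the TSICEExplainer API): perturb the recent window, forecast each perturbed copy, and relate a derived feature of the window to the change in the forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))             # toy time series

def forecast(ts):
    """Stand-in forecaster: naive drift over the last 20 steps."""
    return ts[-1] + (ts[-1] - ts[-20]) / 20

base_forecast = forecast(series)
derived, deltas = [], []
for _ in range(50):
    perturbed = series.copy()
    perturbed[-20:] += rng.normal(scale=0.5, size=20)    # perturb the recent window
    derived.append(perturbed[-20:].mean())               # derived feature of the window
    deltas.append(forecast(perturbed) - base_forecast)   # change vs. the base forecast
print(np.corrcoef(derived, deltas)[0, 1])
```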

- **TimeSeries Local Interpretable Model Agnostic (TSLime) Explainer:** TSLime is a generalisation of the popular LIME explainability algorithm that computes local, model-agnostic explanations for predictions on time series data. TSLime uses time series perturbation techniques and explains the behaviour of a model with respect to a time series sample by fitting a linear surrogate model on those perturbations.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tslime.tslime.TSLimeExplainer) [Notebook (univariate and multivariate)](https://github.com/Trusted-AI/AIX360/tree/master/examples/tslime)
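
A toy sketch of the TSLime idea (not the TSLimeExplainer API): perturb the sample, query the model on each perturbation, and fit a linear surrogate whose coefficients act as per-time-step importances.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
window = np.sin(np.linspace(0, 4, 50))            # the time series sample to explain

def model(ts):
    """Stand-in black-box model output for a window."""
    return float(ts[-5:].mean() - ts[:5].mean())

perturbations = window + rng.normal(scale=0.1, size=(200, 50))
outputs = np.array([model(p) for p in perturbations])

surrogate = LinearRegression().fit(perturbations - window, outputs - model(window))
importance = surrogate.coef_                      # local importance per time step
print(importance.argmax(), importance.max())
```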


**Selective Installation of Explainability Algorithms and Upgrade to Python 3.10**

To expedite the addition of new algorithms and avoid conflicts among package dependencies across algorithms, the toolkit now supports selective installation of algorithms. The installation instructions are available [here](https://github.com/Trusted-AI/AIX360/tree/master#installation). As an example, after cloning the repository, one can install a subset of algorithms with `pip install -e .[rbm,dipvae,tssaliency]`. Most algorithms are now compatible with Python 3.10.
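
For example, after a selective install that includes the `tssaliency` extra, the corresponding explainer should be importable (import path taken from the docs links above):

```python
# Quick check after `pip install -e .[tssaliency]`: only the selectively
# installed algorithms (and their dependencies) are expected to import cleanly.
from aix360.algorithms.tssaliency.tssaliency import TSSaliencyExplainer

print(TSSaliencyExplainer)
```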

0.2.1

- Minor update to CEM parameters
- FeatureBinarizerFromTrees for directly interpretable explainers
- Minor updates to BRCG due to Pandas update
- Updates to HELOC tutorial
- Abstraction class for global black box
- Comment updates to ProtoDash
- Minor bug fixes
- License updates

0.2.0
