tensorflow / tcav
Code for the TCAV ML interpretability project
☆639 · Updated 9 months ago
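For readers new to the project, here is a minimal sketch of the idea behind TCAV (Testing with Concept Activation Vectors), not the tcav library's API: a Concept Activation Vector (CAV) is the normal of a linear classifier that separates concept examples from random examples in a layer's activation space, and the TCAV score for a class is the fraction of its examples whose class logit increases along the CAV direction. The toy activations, gradients, and helper names below are illustrative assumptions.

```python
# Minimal sketch of the TCAV idea (illustrative only; not the tcav library's API).
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier in activation space; its unit normal is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_logit_grads, cav):
    """Fraction of class examples whose directional derivative along the CAV is positive."""
    return float(np.mean(class_logit_grads @ cav > 0))

# Toy data standing in for activations / gradients taken at a chosen layer.
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 64))        # activations of concept images
random_acts = rng.normal(0.0, 1.0, size=(50, 64))          # activations of random images
class_logit_grads = rng.normal(0.1, 1.0, size=(200, 64))   # d(class logit)/d(activations)

cav = compute_cav(concept_acts, random_acts)
print(f"TCAV score: {tcav_score(class_logit_grads, cav):.2f}")
```

In the actual method this is repeated against multiple random counterparts and a statistical test is applied so that only concepts whose scores differ significantly from chance are reported.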
Alternatives and similar repositories for tcav:
Users interested in tcav are comparing it to the libraries listed below.
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆973 · Updated last year
- Towards Automatic Concept-based Explanations ☆159 · Updated last year
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆750 · Updated 4 years ago
- 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations ☆114 · Updated 5 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method (see the sketch after this list) ☆624 · Updated 3 years ago
- Source code/webpage/demos for the What-If Tool ☆946 · Updated 7 months ago
- Interpretability Methods for tf.keras models with TensorFlow 2.x ☆1,025 · Updated 11 months ago
- TensorFlow's Fairness Evaluation and Visualization Toolkit ☆349 · Updated last week
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 6 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆332 · Updated 2 years ago
- Code for "High-Precision Model-Agnostic Explanations" paper ☆802 · Updated 2 years ago
- Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet ☆620 · Updated 2 years ago
- Public facing deeplift repo ☆853 · Updated 3 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,295 · Updated 3 weeks ago
- Code for the Proceedings of the National Academy of Sciences 2020 article, "Understanding the Role of Individual Units in a Deep Neural N… ☆303 · Updated 4 years ago
- Detect model's attention ☆165 · Updated 4 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆598 · Updated 2 months ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆345 · Updated 4 years ago
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆336 · Updated 3 years ago
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models. ☆564 · Updated last year
- Compute receptive fields of your favorite convnets ☆439 · Updated 3 years ago
- A toolkit that streamlines and automates the generation of model cards ☆432 · Updated last year
- Code to create Stylized-ImageNet, a stylized version of standard ImageNet (ICLR 2019 Oral) ☆513 · Updated 2 weeks ago
- Bias Auditing & Fair ML Toolkit ☆715 · Updated last month
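As a companion to the Integrated Gradients entry above, here is a hedged sketch of the attribution formula itself, using a hypothetical toy model with an analytic gradient (in practice the gradient comes from the framework's autodiff); it is not code from that repository.

```python
# Sketch of Integrated Gradients: IG_i = (x_i - x'_i) * average gradient along the
# straight path from a baseline x' to the input x (Riemann-sum approximation).
import numpy as np

W = np.array([0.5, -1.0, 2.0])  # hypothetical toy model: sigmoid of a weighted sum

def model(x):
    return 1.0 / (1.0 + np.exp(-(x @ W)))

def model_grad(x):
    s = model(x)
    return s * (1.0 - s) * W  # analytic gradient of the toy model w.r.t. its input

def integrated_gradients(x, baseline, steps=50):
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([model_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 0.5, -0.5])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)
# Completeness check: attributions should roughly sum to F(x) - F(baseline).
print(attr, attr.sum(), model(x) - model(baseline))
```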