Code for the TCAV ML interpretability project
☆653 · Feb 5, 2026 · Updated last month
Alternatives and similar repositories for tcav
Users interested in tcav are comparing it to the libraries listed below.
- Towards Automatic Concept-based Explanations ☆163 · May 1, 2024 · Updated last year
- Concept activation vectors for Keras ☆14 · Mar 24, 2023 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Mar 25, 2022 · Updated 3 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation ☆52 · Nov 28, 2018 · Updated 7 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆43 · Mar 18, 2019 · Updated 7 years ago
- Invertible Concept-based Explanation (ICE) ☆19 · Oct 29, 2025 · Updated 4 months ago
- ☆113 · Nov 21, 2022 · Updated 3 years ago
- Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) ☆994 · Mar 20, 2024 · Updated 2 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Sep 25, 2019 · Updated 6 years ago
- ☆51 · Aug 29, 2020 · Updated 5 years ago
- Concept Bottleneck Models, ICML 2020 ☆246 · Feb 24, 2023 · Updated 3 years ago
- Code for implementing Bidirectional Relevance scores for Digital Histopathology, which was used for the resu… ☆16 · Mar 24, 2023 · Updated 2 years ago
- A collection of infrastructure and tools for research in neural network interpretability ☆4,703 · Feb 6, 2023 · Updated 3 years ago
- Shows the mapping between ImageNet IDs/labels and PyTorch pre-trained model output IDs/labels ☆10 · Oct 11, 2020 · Updated 5 years ago
- ☆123 · Mar 15, 2022 · Updated 4 years ago
- Network Dissection (http://netdissect.csail.mit.edu) for quantifying interpretability of deep CNNs ☆453 · Aug 25, 2018 · Updated 7 years ago
- Public-facing DeepLIFT repo ☆873 · Apr 28, 2022 · Updated 3 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,307 · Apr 11, 2025 · Updated 11 months ago
- DISSECT: Disentangled Simultaneous Explanations via Concept Traversals ☆12 · Feb 5, 2024 · Updated 2 years ago
- PyTorch Transformer-based language model implementation of ConceptSHAP ☆14 · Jun 11, 2020 · Updated 5 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Feb 8, 2018 · Updated 8 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆29 · Jul 13, 2019 · Updated 6 years ago
- [CVPR 2021] A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts ☆63 · Jan 2, 2023 · Updated 3 years ago
- Interpretability methods for tf.keras models with TensorFlow 2.x ☆1,036 · Jun 3, 2024 · Updated last year
- PyTorch implementation of Google TCAV ☆10 · Jan 11, 2019 · Updated 7 years ago
- A unified framework of perturbation- and gradient-based attribution methods for deep neural network interpretability. DeepExplain also in… ☆761 · Aug 25, 2020 · Updated 5 years ago
- Source code/webpage/demos for the What-If Tool ☆992 · Mar 11, 2026 · Updated last week
- Fit interpretable models. Explain blackbox machine learning. ☆6,816 · Updated this week
- Interpretability and explainability of data and machine learning models ☆1,768 · Feb 26, 2025 · Updated last year
- Code for the "High-Precision Model-Agnostic Explanations" paper ☆812 · Jul 19, 2022 · Updated 3 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Apr 23, 2022 · Updated 3 years ago
- ☆226 · Oct 25, 2020 · Updated 5 years ago
- ☆26 · Aug 30, 2021 · Updated 4 years ago
- Research code for auditing and exploring black-box machine-learning models ☆132 · May 24, 2023 · Updated 2 years ago
- ☆134 · Aug 7, 2019 · Updated 6 years ago
- LIME: Explaining the predictions of any machine learning classifier ☆12,105 · Jul 25, 2024 · Updated last year
- Light version of Network Dissection for quantifying interpretability of networks ☆221 · May 6, 2019 · Updated 6 years ago
- A curated list of awesome responsible machine learning resources ☆3,998 · Updated this week
- ☆24 · Sep 28, 2021 · Updated 4 years ago
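
Several of the entries above reimplement TCAV in different frameworks. As a point of comparison, the core computation is: train a linear classifier separating concept activations from random activations at some layer, take the boundary's normal vector as the Concept Activation Vector (CAV), then score a class by the fraction of its inputs whose logit gradient has a positive directional derivative along the CAV. A minimal sketch of that pipeline, using only NumPy with synthetic activations and a simulated per-input gradient in place of a real network (all names and shapes here are hypothetical, not taken from any of the repos above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a real layer's activations (8-dim here).
concept_acts = rng.normal(loc=1.0, size=(100, 8))  # e.g. "striped" images
random_acts = rng.normal(loc=0.0, size=(100, 8))   # random counterexamples

# 1. Train a linear classifier on concept vs. random activations.
#    Plain logistic regression via gradient descent keeps this
#    self-contained (the reference TCAV code uses scikit-learn instead).
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(100), np.zeros(100)])
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(concept)
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step on log-loss

# 2. The CAV is the unit normal of the learned decision boundary.
cav = w / np.linalg.norm(w)

# 3. TCAV score: fraction of class inputs whose logit gradient points
#    in the CAV direction. Simulated per-input gradients stand in for
#    backprop through a real model.
class_grad = rng.normal(size=8)                          # hypothetical mean gradient
grads = class_grad + 0.3 * rng.normal(size=(100, 8))     # one gradient per input
tcav_score = float(np.mean(grads @ cav > 0))
print(tcav_score)  # a fraction in [0, 1]
```

A score far from 0.5 (relative to CAVs trained against different random sets) suggests the concept is consistently relevant to the class; the real implementations add significance testing over many random runs.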