maxdreyer / Reveal2Revise
Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference.
☆19 · Updated last year
Alternatives and similar repositories for Reveal2Revise:
Users interested in Reveal2Revise are comparing it to the repositories listed below.
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024. ☆13 · Updated last month
- CoSy: Evaluating Textual Explanations. ☆16 · Updated 2 months ago
- ☆11 · Updated 3 months ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆14 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization. ☆124 · Updated 10 months ago
- [ICLR 2023 spotlight] MEDFAIR: Benchmarking Fairness for Medical Imaging. ☆62 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆89 · Updated 2 years ago
- ☆39 · Updated 11 months ago
- PyTorch Transformer-based language model implementation of ConceptSHAP. ☆14 · Updated 4 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics. ☆34 · Updated 11 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024). ☆39 · Updated 5 months ago
- Code for the paper "Post-hoc Concept Bottleneck Models", spotlight at ICLR 2023. ☆77 · Updated 10 months ago
- Official code for the ACCV 2022 paper "Diffusion Models for Counterfactual Explanations". ☆25 · Updated last month
- ☆11 · Updated last year
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations is a ServiceNow Research project that was started at Elemen… ☆13 · Updated last year
- Explain neural networks using Layer-wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆15 · Updated 2 years ago
- Papers and code on Explainable AI, especially for image classification. ☆204 · Updated 2 years ago
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023). ☆16 · Updated last year
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame". ☆36 · Updated 2 years ago
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP. ☆220 · Updated 8 months ago
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift. ☆107 · Updated last year
- Code and data for the CLEVR-XAI dataset. ☆31 · Updated last year
- This repository contains the implementation of Concept Activation Regions, a new framework to explain deep neural networks with human con… ☆11 · Updated 2 years ago
- Implementation of Concept-level Debugging of Part-Prototype Networks. ☆12 · Updated last year
- Python package to accelerate research on generalized out-of-distribution (OOD) detection. ☆12 · Updated 9 months ago
- Pruning a CNN using a CNN, with a toy example. ☆20 · Updated 3 years ago
- Official repository of the ICML 2023 paper "Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat". ☆23 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆61 · Updated 2 weeks ago
- ☆11 · Updated 2 months ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆94 · Updated last year
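Several of the repositories above (the PyTorch LRP implementation, Zennit, the Pixel-Flipping evaluation) build on Layer-wise Relevance Propagation. As a rough illustration of the core idea, and not code taken from any listed repository, the basic LRP-0 rule for a bias-free linear layer redistributes output relevance back to the inputs in proportion to each input's contribution; the helper name `lrp0_linear` is hypothetical:

```python
import numpy as np

def lrp0_linear(a, W, R_out, eps=1e-9):
    """LRP-0 rule for a bias-free linear layer z = a @ W.

    Output relevance R_out is split among the inputs in proportion
    to each contribution a_j * W[j, k] to the pre-activation z_k.
    The small eps only guards against division by zero.
    """
    z = a @ W              # pre-activations, shape (n_out,)
    s = R_out / (z + eps)  # relevance per unit of pre-activation
    return a * (W @ s)     # input relevance, shape (n_in,)

# Tiny example: identity weights simply pass relevance through.
a = np.array([1.0, 2.0])
W = np.eye(2)
R_out = np.array([1.0, 1.0])
R_in = lrp0_linear(a, W, R_out)
print(R_in)  # relevance is conserved: sum(R_in) ~= sum(R_out)
```

Applied layer by layer from the output back to the pixels, this conservation property is what lets the resulting heatmaps be evaluated with perturbation tests such as pixel flipping.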