ahmedmagdiosman / clevr-xai
Dataset and code for the CLEVR-XAI dataset.
☆32 · Updated 2 years ago
Alternatives and similar repositories for clevr-xai
Users interested in clevr-xai compare it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆136 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models", our NeurIPS 2023 paper "Learning to Receive Help", and our ICML 2025 pa… ☆69 · Updated 2 weeks ago
- Concept Bottleneck Models, ICML 2020 ☆218 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆233 · Updated 2 months ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆84 · Updated last year
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆54 · Updated 3 years ago
- LENS Project ☆50 · Updated last year
- Papers and code on explainable AI, especially for image classification ☆219 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- ☆122 · Updated 3 years ago
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration ☆158 · Updated 3 years ago
- A toolkit for quantitative evaluation of data attribution methods ☆53 · Updated 3 months ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆116 · Updated last year
- Detect a model's attention ☆168 · Updated 5 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆75 · Updated 2 years ago
- This repository contains the code of the distribution shift framework presented in "A Fine-Grained Analysis on Distribution Shift" (Wiles e… ☆84 · Updated 4 months ago
- The repository contains lists of papers on causality and how relevant techniques are being used to further enhance deep learning era comp… ☆98 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python ☆163 · Updated last year
- A list of awesome prototype-based papers for explainable artificial intelligence ☆39 · Updated 2 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆38 · Updated 3 years ago
- Framework code with wandb, checkpointing, logging, configs, experimental protocols. Useful for fine-tuning models or training from scratc… ☆151 · Updated 2 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch ☆97 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- Code for the ICLR 2022 paper "Salient ImageNet: How to discover spurious features in deep learning?" ☆40 · Updated 3 years ago
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift ☆110 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons ☆18 · Updated 4 months ago
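Several of the repositories above (Zennit, the basic LRP implementation, the ResNet/DenseNet rules) revolve around Layer-wise Relevance Propagation. As a rough orientation, here is a minimal sketch of the LRP ε-rule for a single linear layer in PyTorch; the function name is made up for illustration and does not reflect the API of any library listed here:

```python
# Hypothetical sketch of the LRP epsilon-rule for one linear layer.
# Not the API of Zennit or any repository above -- just the general idea.
import torch

def lrp_epsilon_linear(a, weight, bias, relevance_out, eps=1e-6):
    """Redistribute output relevance to the inputs of a linear layer.

    a:             input activations, shape (in_features,)
    weight:        layer weights, shape (out_features, in_features)
    bias:          layer bias, shape (out_features,)
    relevance_out: relevance assigned to the outputs, shape (out_features,)
    """
    z = weight @ a + bias            # forward pre-activations
    z = z + eps * torch.sign(z)      # epsilon stabilizer avoids division by ~0
    s = relevance_out / z            # per-output relevance "messages"
    c = weight.t() @ s               # distribute messages back through weights
    return a * c                     # input relevance, proportional to a_i * w_ji
```

With zero bias and a small ε, the rule is conservative: the summed input relevance equals the summed output relevance, which is the property most LRP toolkits (and evaluation frameworks like Quantus/MetaQuantus) build on.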