lkopf / cosy
[NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons.
☆16 · Updated 2 months ago
Alternatives and similar repositories for cosy
Users interested in cosy are comparing it to the libraries listed below.
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics ☆38 · Updated last year
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) ☆47 · Updated 9 months ago
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆182 · Updated last month
- Concept Relevance Propagation for localization models, accepted at the SAIAD workshop at CVPR 2023 ☆15 · Updated last year
- Mechanistic understanding and validation of large AI models with SemanticLens ☆24 · Updated last week
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP ☆230 · Updated last month
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models, presented at MICCAI 2023 ☆20 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆130 · Updated last year
- Source code of the ROAD benchmark for feature attribution methods (ICML 2022) ☆23 · Updated 2 years ago
- Dataset and code for CLEVR-XAI ☆31 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods ☆53 · Updated last month
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023) ☆82 · Updated last year
- ☆15 · Updated 4 months ago
- Concept Bottleneck Models (ICML 2020) ☆210 · Updated 2 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch ☆96 · Updated 2 years ago
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024 ☆16 · Updated 2 months ago
- Quantus is an eXplainable AI toolkit for the responsible evaluation of neural network explanations ☆621 · Updated last month
- LENS Project ☆49 · Updated last year
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" ☆36 · Updated 3 years ago
- 👋 Influenciae is a TensorFlow toolbox for influence functions ☆64 · Updated last year
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… ☆19 · Updated last year
- Benchmark to Evaluate EXplainable AI ☆20 · Updated 5 months ago
- Existing literature on training-data analysis ☆17 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated last year
- [ICLR 23] A framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆110 · Updated last year
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including BatchNorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Explain neural networks using Layer-wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆16 · Updated 3 years ago
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift ☆108 · Updated 2 years ago
- Pruning by Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers, accepted at the eXCV workshop of ECCV 2… ☆29 · Updated 7 months ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆215 · Updated 3 years ago