lkopf / cosy
CoSy: Evaluating Textual Explanations
☆13 · Updated last week
Alternatives and similar repositories for cosy:
Users interested in cosy are comparing it to the libraries listed below:
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆32 · Updated 9 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆38 · Updated this week
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models (MICCAI 2023). ☆19 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆86 · Updated 2 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods such as LRP. ☆208 · Updated 5 months ago
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023). ☆72 · Updated 7 months ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆61 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆122 · Updated 7 months ago
- Source code of the ROAD benchmark for feature attribution methods (ICML 2022). ☆20 · Updated last year
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers (ICML 2024). ☆117 · Updated last month
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift ☆104 · Updated last year
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024. ☆9 · Updated 6 months ago
- Conformal prediction for uncertainty quantification in image segmentation ☆18 · Updated last month
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… ☆27 · Updated this week
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆13 · Updated last year
- A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled concept dat… ☆84 · Updated 9 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024). ☆33 · Updated 2 months ago
- ☆58 · Updated 3 years ago
- ☆27 · Updated 5 months ago
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ☆19 · Updated 6 months ago
- Dataset and code for CLEVR-XAI. ☆31 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆187 · Updated 2 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆237 · Updated 5 months ago
- An Empirical Framework for Domain Generalization in Clinical Settings ☆29 · Updated 2 years ago
- ☆10 · Updated last year
- LENS Project ☆44 · Updated 10 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆60 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration ☆144 · Updated 2 years ago
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations ☆16 · Updated last year