annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics.
☆39 · Updated last year
Alternatives and similar repositories for MetaQuantus
Users interested in MetaQuantus are comparing it to the libraries listed below.
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks via attribution methods like LRP. ☆237 · Updated 4 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization. ☆139 · Updated last year
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆18 · Updated 5 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆54 · Updated 4 months ago
- 👋 Influenciae is a TensorFlow toolbox for influence functions. ☆64 · Updated last year
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers. ☆100 · Updated 8 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ☆631 · Updated 4 months ago
- 👋 Overcomplete is a vision-based SAE toolbox. ☆105 · Updated 2 weeks ago
- LENS Project. ☆51 · Updated last year
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]. ☆207 · Updated 4 months ago
- Dataset and code for CLEVR-XAI. ☆33 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆251 · Updated last year
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers. ☆60 · Updated 2 months ago
- ☆16 · Updated 7 months ago
- 👋 Xplique is a neural network explainability toolbox. ☆721 · Updated this week
- Mechanistic understanding and validation of large AI models with SemanticLens. ☆47 · Updated 2 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024). ☆51 · Updated last year
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆70 · Updated 2 years ago
- ☆122 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature-attribution explainability techniques. ☆70 · Updated 2 years ago
- Concept Relevance Propagation for localization models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- Source code for the ROAD benchmark for feature attribution methods (ICML 2022). ☆24 · Updated 2 years ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation. ☆140 · Updated 2 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models (MICCAI 2023). ☆20 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆100 · Updated 3 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon. ☆33 · Updated 3 weeks ago
- Papers and code on Explainable AI, especially for image classification. ☆224 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration. ☆161 · Updated 3 years ago
- A fairness library in PyTorch. ☆32 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks". ☆54 · Updated 3 years ago
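One of the entries above covers reliability diagrams, which check whether a classifier's confidence scores match its empirical accuracy. As background, here is a minimal pure-Python sketch of the underlying binning computation (the function name, binning scheme, and ECE formula are a generic illustration, not code from that repository):

```python
def reliability_bins(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compute per-bin accuracy.

    A model is well calibrated when, within each confidence bin, the
    mean confidence matches the fraction of correct predictions.
    Also returns the expected calibration error (ECE): the weighted
    mean absolute gap between confidence and accuracy across bins.
    """
    edges = [i / n_bins for i in range(n_bins + 1)]
    bin_conf = [None] * n_bins  # mean confidence per bin
    bin_acc = [None] * n_bins   # empirical accuracy per bin
    ece = 0.0
    n = len(confidences)
    for i in range(n_bins):
        # half-open bins (edges[i], edges[i+1]]; put exact zeros in bin 0
        members = [(c, ok) for c, ok in zip(confidences, correct)
                   if edges[i] < c <= edges[i + 1] or (i == 0 and c == 0.0)]
        if members:
            bin_conf[i] = sum(c for c, _ in members) / len(members)
            bin_acc[i] = sum(ok for _, ok in members) / len(members)
            ece += (len(members) / n) * abs(bin_acc[i] - bin_conf[i])
    return bin_conf, bin_acc, ece
```

Plotting `bin_acc` against `bin_conf` gives the reliability diagram itself; a perfectly calibrated model lies on the diagonal and has an ECE of zero.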