dilyabareeva / quanda
A toolkit for quantitative evaluation of data attribution methods.
★55 · Updated 6 months ago
Alternatives and similar repositories for quanda
Users interested in quanda are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox · ★119 · Updated 2 months ago
- Mechanistic understanding and validation of large AI models with SemanticLens · ★50 · Updated 2 months ago
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] · ★219 · Updated 7 months ago
- PyTorch Explain: Interpretable Deep Learning in Python. · ★169 · Updated last year
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ★40 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ★71 · Updated 3 years ago
- Inference code for "TabDPT: Scaling Tabular Foundation Models on Real Data" · ★73 · Updated last week
- A fast, effective data attribution method for neural networks in PyTorch · ★229 · Updated last year
- Research on Tabular Foundation Models · ★69 · Updated last year
- [NeurIPS 2025 MechInterp Workshop - Spotlight] Official implementation of the paper "RelP: Faithful and Efficient Circuit Discovery in La… · ★25 · Updated 3 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI · ★55 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. · ★78 · Updated 2 years ago
- Dataset and code for CLEVR-XAI. · ★33 · Updated 2 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… · ★74 · Updated 3 years ago
- A simple PyTorch implementation of influence functions. · ★92 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ★140 · Updated 3 weeks ago
- relplot: Utilities for measuring calibration and plotting reliability diagrams · ★180 · Updated 3 months ago
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" · ★45 · Updated last year
- Conformal Language Modeling · ★31 · Updated 2 years ago
- Interpreto is an interpretability toolbox for LLMs · ★141 · Updated this week
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. · ★239 · Updated 2 weeks ago
- Repository for "PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits", accepted at CVPR 2024 XAI4CV Works… · ★20 · Updated last year
- PyTorch library for Active Fine-Tuning · ★96 · Updated 4 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ★252 · Updated last year
- ★50 · Updated last year
- [ICLR 2024] Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How · ★33 · Updated 5 months ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★71 · Updated 2 years ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation · ★142 · Updated 3 weeks ago
- Influence Estimation for Gradient-Boosted Decision Trees · ★29 · Updated last year
- ★141 · Updated 2 years ago