dilyabareeva / quanda
A toolkit for quantitative evaluation of data attribution methods.
★54, updated 6 months ago
Alternatives and similar repositories for quanda
Users interested in quanda are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox (★117, updated last month)
- Mechanistic understanding and validation of large AI models with SemanticLens (★50, updated last month)
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] (★218, updated 6 months ago)
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics (★40, updated last year)
- A fast, effective data attribution method for neural networks in PyTorch (★227, updated last year)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (★252, updated last year)
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… (★19, updated last year)
- PyTorch Explain: Interpretable Deep Learning in Python (★168, updated last year)
- [NeurIPS 2025 MechInterp Workshop - Spotlight] Official implementation of the paper "RelP: Faithful and Efficient Circuit Discovery in La… (★24, updated 2 months ago)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (★70, updated 3 years ago)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (★71, updated 2 years ago)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (★140, updated 2 weeks ago)
- Interpreto is an interpretability toolbox for LLMs (★124, updated last week)
- Training and evaluating NBM and SPAM for interpretable machine learning (★78, updated 2 years ago)
- Research on Tabular Foundation Models (★68, updated last year)
- (no description) (★32, updated 2 years ago)
- Dataset and code for CLEVR-XAI (★33, updated 2 years ago)
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI (★55, updated 3 years ago)
- Sparse and discrete interpretability tool for neural networks (★64, updated last year)
- A simple PyTorch implementation of influence functions (★92, updated last year)
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP (★239, updated 5 months ago)
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" (★45, updated last year)
- Uncertainty-aware representation learning (URL) benchmark (★106, updated 10 months ago)
- Conformal prediction for controlling monotonic risk functions; simple accompanying PyTorch code for conformal risk control in computer vi… (★75, updated 3 years ago)
- Recycling diverse models (★46, updated 3 years ago)
- (no description) (★33, updated last year)
- relplot: utilities for measuring calibration and plotting reliability diagrams (★179, updated 2 months ago)
- Code for Language-Interfaced FineTuning for Non-Language Machine Learning Tasks (★133, updated last year)
- Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value (ICML 2023) (★21, updated 2 years ago)
- (no description) (★47, updated 3 years ago)
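Several entries above (quanda itself, the influence-function and data attribution repositories) revolve around scoring how much each training example contributed to a prediction. A minimal sketch of the core idea, gradient-similarity attribution on a toy linear model, is shown below; the model, data, and variable names are illustrative assumptions and do not reflect any listed library's API:

```python
# Illustrative sketch of gradient-similarity data attribution
# (TracIn-style, single checkpoint): score each training example
# by the dot product of its loss gradient with the test loss gradient.
# Toy linear model and random data; NOT the API of quanda or any repo above.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(8, 3))   # 8 training points, 3 features
y_train = rng.normal(size=8)
x_test = rng.normal(size=3)         # a single test point
y_test = rng.normal()

w = np.zeros(3)  # parameters of the toy linear model

def grad(x, y):
    # Gradient of the squared loss (x.w - y)^2 with respect to w.
    return 2.0 * (x @ w - y) * x

g_test = grad(x_test, y_test)

# Attribution score for training point i: grad(train_i) . grad(test).
scores = np.array([grad(X_train[i], y_train[i]) @ g_test for i in range(8)])

# Training points ranked from most to least influential on the test point.
ranking = np.argsort(-scores)
print(ranking.tolist())
```

Toolkits like quanda then evaluate such rankings quantitatively, e.g. by checking whether highly ranked examples really drive the prediction when removed or relabeled.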