dilyabareeva / quanda
A toolkit for quantitative evaluation of data attribution methods.
☆53 · Updated 3 months ago
Alternatives and similar repositories for quanda
Users interested in quanda are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox ☆96 · Updated 3 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 · Updated last year
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆196 · Updated 3 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- Framework code with wandb, checkpointing, logging, configs, experimental protocols. Useful for fine-tuning models or training from scratch… ☆151 · Updated 2 years ago
- A fast, effective data attribution method for neural networks in PyTorch ☆220 · Updated 11 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" ☆45 · Updated 11 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆136 · Updated last year
- Inference code for "TabDPT: Scaling Tabular Foundation Models on Real Data" ☆60 · Updated 2 weeks ago
- Research on Tabular Foundation Models ☆58 · Updated 10 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning ☆78 · Updated 2 years ago
- Mechanistic understanding and validation of large AI models with SemanticLens ☆41 · Updated last month
- ☆133 · Updated 2 weeks ago
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP ☆233 · Updated 3 months ago
- ☆22 · Updated 6 months ago
- ☆49 · Updated 9 months ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Workshop ☆19 · Updated last year
- Conformal Language Modeling ☆32 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vision… ☆71 · Updated 2 years ago
- ☆355 · Updated 2 months ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ☆99 · Updated 8 months ago
- PyTorch Explain: Interpretable Deep Learning in Python ☆163 · Updated last year
- ☆241 · Updated last year
- Attribution-based Parameter Decomposition ☆31 · Updated 4 months ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆68 · Updated 2 years ago
- ☆32 · Updated 11 months ago
- ☆62 · Updated 3 years ago