dilyabareeva / quanda
A toolkit for quantitative evaluation of data attribution methods.
★47 · Updated last month
Alternatives and similar repositories for quanda
Users interested in quanda are comparing it to the libraries listed below.
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ★34 · Updated last year
- Overcomplete is a Vision-based SAE Toolbox · ★58 · Updated 2 months ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★64 · Updated last year
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] · ★165 · Updated 2 months ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet · ★32 · Updated last year
- TabDPT: Scaling Tabular Foundation Models · ★27 · Updated last month
- ★45 · Updated 2 years ago
- CoSy: Evaluating Textual Explanations · ★16 · Updated 4 months ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at CVPR 2024 XAI4CV Works… · ★14 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch · ★211 · Updated 6 months ago
- ★13 · Updated 3 weeks ago
- ★28 · Updated 2 years ago
- ★97 · Updated last month
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) · ★44 · Updated 7 months ago
- Data for "Datamodels: Predicting Predictions with Training Data" · ★97 · Updated 2 years ago
- ★11 · Updated last month
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers, paper accepted at the eXCV workshop of ECCV 2… · ★24 · Updated 5 months ago
- ★60 · Updated 3 years ago
- Attribution-based Parameter Decomposition · ★24 · Updated this week
- A simple PyTorch implementation of influence functions · ★88 · Updated 11 months ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within Huggingface Transformers models · ★41 · Updated 3 months ago
- ★94 · Updated 3 months ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" · ★31 · Updated 10 months ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… · ★62 · Updated 2 weeks ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ★66 · Updated 2 years ago
- Sparse Autoencoder Training Library · ★52 · Updated last month
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… · ★66 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ★128 · Updated 11 months ago
- ★25 · Updated 3 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models, presented at the MICCAI 2023 conference · ★19 · Updated last year