rachtibat / LRP-eXplains-Transformers
Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]
☆182 · Updated last month
Alternatives and similar repositories for LRP-eXplains-Transformers
Users interested in LRP-eXplains-Transformers are comparing it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆130 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods ☆53 · Updated last month
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons ☆16 · Updated 2 months ago
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP ☆230 · Updated last month
- Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation" ☆63 · Updated 3 years ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆74 · Updated 10 months ago
- Using sparse coding to find distributed representations used by neural networks ☆264 · Updated last year
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics ☆38 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆217 · Updated 9 months ago
- Materials for the EACL 2024 tutorial "Transformer-specific Interpretability" ☆60 · Updated last year
- A resource repository for representation engineering in large language models ☆131 · Updated 9 months ago
- Mechanistic understanding and validation of large AI models with SemanticLens ☆24 · Updated last week
- A repository collecting relevant resources on interpretability in LLMs ☆369 · Updated 10 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) ☆47 · Updated 9 months ago
- A simple PyTorch implementation of influence functions ☆91 · Updated last year
- Concept Bottleneck Models (ICML 2020) ☆210 · Updated 2 years ago
- Sparse Autoencoder for Mechanistic Interpretability ☆260 · Updated last year
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆33 · Updated last year
- A repository of summaries of recent explainable AI / interpretable ML approaches ☆81 · Updated 10 months ago
- Influence Analysis and Estimation: Survey, Papers, and Taxonomy ☆82 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch ☆96 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated last year
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆356 · Updated last month
- Conformal Language Modeling ☆32 · Updated last year
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models (MICCAI 2023) ☆20 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023) ☆82 · Updated last year
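Several of the entries above, like LRP-eXplains-Transformers itself, Zennit, and the basic PyTorch LRP implementation, revolve around Layer-wise Relevance Propagation. As an illustrative sketch only (the function name and toy values below are ours, not taken from any listed repository), the LRP epsilon rule for a single linear layer redistributes output relevance to inputs in proportion to each input's contribution to the pre-activation:

```python
import numpy as np

def lrp_epsilon_linear(a, weight, bias, relevance_out, eps=1e-6):
    """LRP epsilon rule for one linear layer (illustrative sketch).

    a:             input activations, shape (in_features,)
    weight:        layer weight, shape (out_features, in_features)
    bias:          layer bias, shape (out_features,)
    relevance_out: relevance assigned to the outputs, shape (out_features,)
    """
    z = weight @ a + bias            # forward pre-activations
    z = z + eps * np.sign(z)         # epsilon stabilizer avoids division by ~0
    s = relevance_out / z            # per-output relevance-to-activation ratio
    c = weight.T @ s                 # redistribute ratios back through the weights
    return a * c                     # input relevance, proportional to contribution

# Toy usage: relevance is conserved up to the share absorbed by the bias.
a = np.array([1.0, 2.0, 3.0])
w = np.array([[0.5, -0.2, 0.1],
              [0.3, 0.8, -0.5]])
b = np.array([0.1, -0.1])
r_out = np.array([1.0, 0.5])
r_in = lrp_epsilon_linear(a, w, b, r_out)
```

Real implementations such as Zennit apply rules like this layer by layer via autograd hooks rather than explicit matrix algebra, but the redistribution arithmetic is the same.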