attention_flow (☆265, updated Sep 9, 2021)
Alternatives and similar repositories for attention_flow
Users interested in attention_flow are comparing it to the repositories listed below.
- Explainability for Vision Transformers (☆1,064, updated Mar 12, 2022)
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize … (☆1,976, updated Jan 24, 2024)
- (no description) (☆87, updated Apr 16, 2024)
- Measuring the Mixing of Contextual Information in the Transformer (☆34, updated May 27, 2023)
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… (☆900, updated Aug 24, 2023)
- [ICML 2022] PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:… (☆20, updated Jul 21, 2022)
- A PyTorch implementation of the explainable-AI work "Contrastive Layerwise Relevance Propagation (CLRP)" (☆17, updated Jun 24, 2022)
- Official implementation of the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" (☆20, updated May 19, 2022)
- PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale") (☆2,122, updated Jun 7, 2022)
- Source code for "Hold me tight! Influence of discriminative features on deep network boundaries" (☆21, updated Dec 10, 2021)
- [NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers (☆21, updated May 16, 2023)
- (no description) (☆11, updated Dec 23, 2021)
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 (☆45, updated Nov 6, 2024)
- Code and scripts for "Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning" (☆20, updated Mar 23, 2022)
- Code and data for "Debiasing Methods in Natural Language Understanding Make Bias More Accessible" (☆14, updated Apr 24, 2022)
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) (☆62, updated May 12, 2022)
- (no description) (☆18, updated Apr 27, 2023)
- Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation" (☆67, updated Feb 14, 2022)
- A Diagnostic Study of Explainability Techniques for Text Classification (☆69, updated Oct 23, 2020)
- Code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, t… (☆319, updated Aug 2, 2021)
- PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers" (☆433, updated Sep 5, 2023)
- On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification (☆29, updated Nov 30, 2022)
- Code for the paper "Interpretability-Aware Vision Transformer" (☆22, updated Sep 14, 2023)
- Implementation of the paper "CXR-IRGen: An Integrated Vision and Language Model for the Generation of Clinically Accurate Chest X-Ray Ima… (☆21, updated Jul 2, 2024)
- Code and data for the paper "Learning to Deceive with Attention-Based Explanations" (☆18, updated Jan 22, 2021)
- Landing page for MIB: A Mechanistic Interpretability Benchmark (☆24, updated Aug 15, 2025)
- Measuring whether attention is explanation with ROAR (☆22, updated Mar 3, 2023)
- Model explainability that works seamlessly with 🤗 Transformers; explain your transformer model in just two lines of code (☆1,410, updated Aug 30, 2023)
- A visual analysis tool to explore learned representations in Transformer models (☆603, updated Feb 7, 2024)
- Code for the paper "Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers" (☆18, updated Dec 15, 2020)
- (no description) (☆14, updated Apr 29, 2025)
- Visualizing the learned space-time attention using Attention Rollout (☆40, updated Apr 1, 2022)
- CartoonX, a saliency-map method for image classifiers operating in the wavelet/shearlet domain; it extracts the relevant piecewise-smoo… (☆10, updated Feb 20, 2026)
- (no description) (☆46, updated Jun 20, 2024)
- Code for the ICLR 2022 paper "Attention-based Interpretability with Concept Transformers" (☆42, updated Sep 17, 2025)
- Official code for the "Towards Transparent and Explainable Attention Models" paper (ACL 2020) (☆35, updated Jun 22, 2022)
- (no description) (☆20, updated Jan 16, 2024)
- Official code repo for the paper "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions" (NeurIPS 2… (☆42, updated Oct 31, 2022)
- Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation (CVPR 2024) (☆11, updated Jun 16, 2024)