AmeenAli / XAI_Transformers
Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation"
☆63 · Updated 3 years ago
Alternatives and similar repositories for XAI_Transformers:
Users interested in XAI_Transformers are comparing it to the repositories listed below.
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆149 · Updated last month
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆54 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆44 · Updated this week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- "Understanding Dataset Difficulty with V-Usable Information" (ICML 2022, outstanding paper) ☆85 · Updated last year
- Official implementation of the paper "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision… ☆28 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆89 · Updated 2 years ago
- Code for the paper "A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others" ☆48 · Updated 9 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆124 · Updated 10 months ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆61 · Updated 3 weeks ago
- ☆66 · Updated 3 years ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆94 · Updated last year
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆31 · Updated 9 months ago
- ☆45 · Updated 2 years ago
- A repository of summaries of recent explainable AI / interpretable ML approaches ☆74 · Updated 6 months ago
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023) ☆77 · Updated 11 months ago
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts" (ICLR 2023) ☆29 · Updated last year
- SpuCo, a Python package developed to further research addressing spurious correlations ☆24 · Updated 3 months ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆30 · Updated last year
- A PyTorch implementation of the explainable AI work 'Contrastive Layerwise Relevance Propagation (CLRP)' ☆17 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆154 · Updated 11 months ago
- Concept Bottleneck Models, ICML 2020 ☆197 · Updated 2 years ago
- ☆28 · Updated last year
- Repository for research works and resources related to model reprogramming <https://arxiv.org/abs/2202.10629> ☆61 · Updated last year
- Code for Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks ☆125 · Updated 5 months ago
- NumPy library for calibration metrics ☆70 · Updated last month
- [ICML 2022] PyTorch implementation of "Rethinking Attention-Model Explainability through Faithfulness Violation Test" (https:… ☆19 · Updated 2 years ago
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" ☆38 · Updated 5 months ago
- ☆91 · Updated 2 months ago