Karim-53 / Compare-xAI
A Unified Approach to Evaluate and Compare Explainable AI methods
☆14 · Updated last year
Alternatives and similar repositories for Compare-xAI:
Users who are interested in Compare-xAI are comparing it to the libraries listed below.
- Code for paper: Are Large Language Models Post Hoc Explainers? ☆28 · Updated 5 months ago
- A framework for assessing and improving classification fairness. ☆33 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆237 · Updated 5 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆60 · Updated last year
- A Diagnostic Study of Explainability Techniques for Text Classification ☆66 · Updated 4 years ago
- Code for paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆17 · Updated 2 years ago
- ☆118 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 2 years ago
- Uncertainty Quantification with Pre-trained Language Models: An Empirical Analysis ☆14 · Updated 2 years ago
- On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification ☆30 · Updated 2 years ago
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… ☆31 · Updated last year
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆38 · Updated 9 months ago
- A2T: Towards Improving Adversarial Training of NLP Models (EMNLP 2021 Findings) ☆26 · Updated 3 years ago
- ☆86 · Updated last year
- Code for "Astraea: Grammar-based Fairness Testing" ☆9 · Updated 3 years ago
- The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction. ☆20 · Updated last year
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆31 · Updated last month
- tianlu-wang / Identifying-and-Mitigating-Spurious-Correlations-for-Improving-Robustness-in-NLP-Models (NAACL 2022 Findings) ☆16 · Updated 2 years ago
- "Understanding Dataset Difficulty with V-Usable Information" (ICML 2022, outstanding paper) ☆82 · Updated last year
- ☆26 · Updated 2 years ago
- A repository for summaries of recent explainable AI / interpretable ML approaches ☆69 · Updated 3 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆81 · Updated 2 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆47 · Updated 2 years ago
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… ☆37 · Updated 2 years ago
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆19 · Updated 2 years ago
- ☆23 · Updated 5 months ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆128 · Updated last month
- Data set for LREC 2020 paper "I Feel Offended, Don't Be Abusive!" ☆19 · Updated last year
- For easy metric logging and visualization ☆14 · Updated 3 weeks ago
- ☆42 · Updated 11 months ago