mtsang / archipelago
Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2020
☆39 · Updated 2 years ago
Alternatives and similar repositories for archipelago:
Users interested in archipelago are comparing it to the libraries listed below.
- This is the official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" ☆19 · Updated 2 years ago
- Code for the paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆17 · Updated 2 years ago
- How certain is your transformer? ☆25 · Updated 4 years ago
- Learning the Difference that Makes a Difference with Counterfactually-Augmented Data ☆170 · Updated 4 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" ☆46 · Updated last year
- ☆89 · Updated this week
- Interpretable Neural Predictions with Differentiable Binary Variables ☆84 · Updated 4 years ago
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as for example used for knowledge base comp… ☆21 · Updated 4 years ago
- ☆27 · Updated last year
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Code base for the EMNLP 2021 Findings paper: Cartography Active Learning ☆14 · Updated last year
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆40 · Updated 2 years ago
- Code for "Rissanen Data Analysis: Examining Dataset Characteristics via Description Length" by Ethan Perez, Douwe Kiela, and Kyunghyun Ch… ☆36 · Updated 3 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 3 years ago
- Code for the paper "Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data" ☆35 · Updated 4 years ago
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models. ☆49 · Updated last year
- ☆89 · Updated 2 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆67 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch ☆53 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆65 · Updated 2 years ago
- ☆63 · Updated 5 years ago
- Code accompanying the paper "Information-Theoretic Probing for Linguistic Structure", published in ACL 2020 ☆21 · Updated 5 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing the hidden activations of neural models ☆81 · Updated last year
- Find text features that are most related to an outcome, controlling for confounds ☆60 · Updated 9 months ago
- Repository collecting resources and best practices to improve experimental rigour in deep learning research ☆27 · Updated 2 years ago
- Demo for the method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" ☆56 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated 3 years ago
- Code for the paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 4 years ago