mtsang / archipelago
Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2020
☆39 · Updated 2 years ago
Alternatives and similar repositories for archipelago
Users interested in archipelago are comparing it to the libraries listed below.
- ☆89 · Updated last month
- This is the official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" ☆19 · Updated 3 years ago
- Code for paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆18 · Updated 2 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆41 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- ☆27 · Updated last year
- Learning the Difference that Makes a Difference with Counterfactually-Augmented Data ☆170 · Updated 4 years ago
- ☆26 · Updated 2 years ago
- Code for "Rissanen Data Analysis: Examining Dataset Characteristics via Description Length" by Ethan Perez, Douwe Kiela, and Kyungyhun Ch…☆36Updated 3 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆67 · Updated 4 years ago
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as for example used for knowledge base comp… ☆21 · Updated 4 years ago
- Demo for method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" ☆56 · Updated 4 years ago
- Implementation of experiments in paper "Learning from Rules Generalizing Labeled Exemplars" to appear in ICLR 2020 (https://openreview.net… ☆50 · Updated 2 years ago
- PyTorch implementation of DiffMask ☆55 · Updated last year
- diagNNose is a Python library that facilitates a broad set of tools for analysing hidden activations of neural models. ☆81 · Updated last year
- Interpretable Neural Predictions with Differentiable Binary Variables ☆84 · Updated 4 years ago
- Source code for "Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models", ICLR 2020. ☆30 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" ☆45 · Updated last year
- Code for paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 4 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆96 · Updated 2 years ago
- OOD Generalization and Detection (ACL 2020) ☆60 · Updated 5 years ago
- ☆63 · Updated 5 years ago
- Discretized Integrated Gradients for Explaining Language Models (EMNLP 2021) ☆27 · Updated 3 years ago
- Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch. ☆53 · Updated 2 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- This is a repository with the code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆71 · Updated 9 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆66 · Updated 2 years ago
- ☆39 · Updated 6 years ago