YilunZhou / ExSum
Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding"
☆64 · Updated 3 years ago
Alternatives and similar repositories for ExSum:
Users interested in ExSum are comparing it to the libraries listed below.
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆42 · Updated 2 months ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆156 · Updated 2 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- ☆136 · Updated last year
- Lightning template for easy prototyping ⚡️ ☆13 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- ☆37 · Updated last year
- Library for creating causal chains using language models. ☆78 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆66 · Updated 10 months ago
- Code for gradient rollback, which explains predictions of neural matrix factorization models, used for example for knowledge base comp… ☆21 · Updated 4 years ago
- ☆18 · Updated 3 years ago
- ☆44 · Updated 5 months ago
- A Python package for benchmarking interpretability techniques on Transformers. ☆212 · Updated 7 months ago
- ☆54 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Course for Interpreting ML Models ☆52 · Updated 2 years ago
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆48 · Updated 2 years ago
- ☆46 · Updated 4 years ago
- ☆31 · Updated 2 years ago
- Measuring whether attention is explanation with ROAR ☆22 · Updated 2 years ago
- A diff tool for language models ☆42 · Updated last year
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2, and T5) Relation Aware! ☆41 · Updated 2 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing hidden activations of neural models. ☆81 · Updated last year
- ☆30 · Updated 3 years ago
- Code for the ACL 2022 paper "Cluster & Tune: Boost Cold Start Performance in Text Classification" ☆28 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago