YilunZhou / ExSum
Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding"
⭐64 · Updated 3 years ago
Alternatives and similar repositories for ExSum
Users interested in ExSum are comparing it to the libraries listed below:
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ⭐82 · Updated 3 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ⭐42 · Updated 4 months ago
- ⭐40 · Updated last year
- A diff tool for language models ⭐42 · Updated last year
- Ranking of fine-tuned HF models as base models. ⭐35 · Updated 2 months ago
- ⭐54 · Updated 2 years ago
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ⭐41 · Updated 2 years ago
- ⭐18 · Updated 3 years ago
- ⭐139 · Updated last year
- Framework for zero-shot learning with knowledge graphs. ⭐113 · Updated 2 years ago
- Embedding Recycling for Language models ⭐38 · Updated 2 years ago
- Ensembling Hugging Face transformers made easy ⭐63 · Updated 2 years ago
- Lightning template for easy prototyping ⚡️ ⭐13 · Updated 2 years ago
- diagNNose is a Python library providing a broad set of tools for analysing hidden activations of neural models. ⭐82 · Updated last year
- A Python library with command-line tools to interact with Dynabench (https://dynabench.org/), such as uploading models. ⭐55 · Updated 3 years ago
- A Python package for benchmarking interpretability techniques on Transformers. ⭐213 · Updated 9 months ago
- Measuring if attention is explanation with ROAR ⭐22 · Updated 2 years ago
- ⭐44 · Updated 7 months ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ⭐157 · Updated 2 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ⭐31 · Updated 4 years ago
- Apps built using Inspired Cognition's Critique. ⭐58 · Updated 2 years ago
- Experiments on GPT-3's ability to fit numerical models in-context. ⭐14 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ⭐28 · Updated 3 years ago
- The codebase for Causal Distillation for Language Models (NAACL '22) ⭐25 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ⭐49 · Updated 3 years ago
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as used for example in knowledge base comp… ⭐21 · Updated 4 years ago
- Library for creating causal chains using language models. ⭐78 · Updated 2 years ago
- ⭐22 · Updated 3 years ago
- ⭐19 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ⭐93 · Updated 2 years ago