mohsenfayyaz / GlobEnc
[NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers
☆21 · Updated last year
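As context for the entries below: GlobEnc belongs to the attention-rollout family of attribution methods, in which per-layer token-to-token maps are composed across the encoder stack to obtain global token attributions. The sketch below illustrates only the basic rollout recursion that GlobEnc generalizes; it is not the GlobEnc repository's API, and the model choice (`bert-base-uncased`), the head averaging, and the residual correction are generic assumptions for demonstration.

```python
# Minimal sketch of attention rollout, the recursion GlobEnc generalizes.
# NOT the GlobEnc repo's API: model name, head averaging, and the residual
# correction below are illustrative assumptions only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("GlobEnc attributes predictions to input tokens.",
                   return_tensors="pt")
with torch.no_grad():
    # Tuple of per-layer attention tensors, each (batch, heads, seq, seq).
    attentions = model(**inputs).attentions

rollout = None
for layer_attn in attentions:
    a = layer_attn.mean(dim=1)[0]                    # average heads -> (seq, seq)
    a = a + torch.eye(a.size(0))                     # account for the residual path
    a = a / a.sum(dim=-1, keepdim=True)              # renormalize rows
    rollout = a if rollout is None else a @ rollout  # compose across layers

# Row 0 ([CLS]) gives each input token's global attribution score.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, rollout[0]):
    print(f"{tok:>12s}  {score.item():.3f}")
```

GlobEnc's contribution is to replace the raw attention map in this recursion with a decomposition of the whole encoder layer (self-attention plus the residual connections and layer normalizations), which the paper reports yields more faithful attributions than attention weights alone.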
Alternatives and similar repositories for GlobEnc:
Users interested in GlobEnc are comparing it to the libraries listed below.
- DecompX: Explaining Transformers Decisions by Propagating Token Decomposition ☆18 · Updated last year
- Measuring the Mixing of Contextual Information in the Transformer ☆29 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆37 · Updated 2 years ago
- Query-focused summarization data ☆41 · Updated 2 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" ☆32 · Updated last year
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Benchmark API for Multidomain Language Modeling ☆24 · Updated 2 years ago
- The evaluation pipeline for the 2024 BabyLM Challenge ☆30 · Updated 5 months ago
- Frustratingly Simple Pretraining Alternatives to Masked Language Modeling (EMNLP 2021) ☆31 · Updated 3 years ago
- Evaluation pipeline for the BabyLM Challenge 2023 ☆75 · Updated last year
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- Automatic metrics for GEM tasks ☆65 · Updated 2 years ago
- The geometry of multilingual language model representations (EMNLP 2022) ☆20 · Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 2 years ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- PyTorch reimplementation of REALM and ORQA ☆22 · Updated 3 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆33 · Updated 3 years ago