mohsenfayyaz / GlobEnc
[NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers
☆21 · Updated last year
Alternatives and similar repositories for GlobEnc:
Users interested in GlobEnc are also comparing it to the repositories listed below.
- Measuring the Mixing of Contextual Information in the Transformer ☆28 · Updated last year
- DecompX: Explaining Transformers Decisions by Propagating Token Decomposition ☆17 · Updated last year
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- ☆39 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆31 · Updated 3 years ago
- Debiasing Methods in Natural Language Understanding Make Bias More Accessible: Code and Data ☆14 · Updated 2 years ago
- ☆31 · Updated last year
- Code for paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆21 · Updated 4 years ago
- ☆58 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data ☆36 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆37 · Updated 2 years ago
- The evaluation pipeline for the 2024 BabyLM Challenge ☆29 · Updated 4 months ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" ☆32 · Updated last year
- ☆44 · Updated last year
- ☆21 · Updated 10 months ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆45 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 2 years ago
- Query-focused summarization data ☆41 · Updated 2 years ago
- UDapter is a multilingual dependency parser that uses "contextual" adapters together with language-typology features for language-specifi… ☆30 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆30 · Updated 9 months ago
- How Contextual are Contextualized Word Representations? ☆41 · Updated 4 years ago
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ☆17 · Updated 3 years ago