alexandra-chron / hierarchical-domain-adaptation
Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models".
☆32 · Updated 2 years ago
Alternatives and similar repositories for hierarchical-domain-adaptation
Users interested in hierarchical-domain-adaptation are comparing it to the repositories listed below.
- Code base for the EMNLP 2021 Findings paper "Cartography Active Learning" ☆14 · Updated 5 months ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆75 · Updated last year
- [NAACL 2022] GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers ☆21 · Updated 2 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- Code for the EMNLP 2021 paper "Active Learning by Acquiring Contrastive Examples" & the ACL 2022 paper "On the Importance of Effectively … ☆128 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- ☆39 · Updated 4 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 3 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data ☆40 · Updated 3 years ago
- Measuring the Mixing of Contextual Information in the Transformer ☆32 · Updated 2 years ago
- Automatic metrics for GEM tasks ☆67 · Updated 3 years ago
- Debiasing Methods in Natural Language Understanding Make Bias More Accessible: Code and Data ☆14 · Updated 3 years ago
- Benchmark API for Multidomain Language Modeling ☆25 · Updated 3 years ago
- ☆22 · Updated 4 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- ☆24 · Updated 4 years ago
- ☆87 · Updated last year
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆50 · Updated 3 years ago
- ☆47 · Updated last year
- The Multitask Long Document Benchmark ☆41 · Updated 3 years ago
- UDapter is a multilingual dependency parser that uses "contextual" adapters together with language-typology features for language-specifi… ☆31 · Updated 2 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆113 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- Query-focused summarization data ☆42 · Updated 2 years ago
- Software for transferring pre-trained English models to foreign languages ☆19 · Updated 2 years ago