alexandra-chron / hierarchical-domain-adaptation
Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models".
☆32 · Updated last year
Alternatives and similar repositories for hierarchical-domain-adaptation
Users interested in hierarchical-domain-adaptation are comparing it to the libraries listed below:
- Code base for the EMNLP 2021 Findings paper: Cartography Active Learning ☆14 · Updated last year
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆31 · Updated 3 years ago
- ☆39 · Updated 3 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- Code for the EMNLP 2021 paper "Active Learning by Acquiring Contrastive Examples" & the ACL 2022 paper "On the Importance of Effectively … ☆125 · Updated 2 years ago
- ☆31 · Updated last year
- ☆13 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆48 · Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- PyTorch source code of NAACL 2021 paper "Improving the Lexical Ability of Pretrained Language Models for Unsupervised Neural Machine Tran… ☆17 · Updated 2 years ago
- Python source code for EMNLP 2021 Findings paper: "Subword Mapping and Anchoring Across Languages". ☆13 · Updated 3 years ago
- Benchmark API for Multidomain Language Modeling ☆24 · Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data ☆36 · Updated 3 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- UDapter is a multilingual dependency parser that uses "contextual" adapters together with language-typology features for language-specifi… ☆31 · Updated 2 years ago
- Python source code for EMNLP 2020 paper "Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT". ☆35 · Updated 3 years ago
- Automatic metrics for GEM tasks ☆66 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- This is a repository with the code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆71 · Updated 8 months ago
- Meta Representation Transformation for Low-resource Cross-lingual Learning ☆40 · Updated 4 years ago
- This is a repository for the paper on testing inductive bias with scaled-down RoBERTa models. ☆20 · Updated 3 years ago
- Code for NAACL 2022 paper "Reframing Human-AI Collaboration for Generating Free-Text Explanations" ☆31 · Updated 2 years ago
- Measuring the Mixing of Contextual Information in the Transformer ☆29 · Updated last year