jungokasai / deep-shallow
☆43 · Updated 4 years ago
Alternatives and similar repositories for deep-shallow:
Users that are interested in deep-shallow are comparing it to the libraries listed below
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- ☆42 · Updated 4 years ago
- Code for the paper "Modelling Latent Translations for Cross-Lingual Transfer" ☆17 · Updated 3 years ago
- PyTorch implementation of NAACL 2021 paper "Multi-view Subword Regularization" ☆24 · Updated 3 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Updated 3 years ago
- ☆21 · Updated 3 years ago
- ☆20 · Updated 4 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- Official code for the ICLR 2020 paper "Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction" ☆30 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆22 · Updated 4 years ago
- ☆45 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- Code for the paper "Balancing Training for Multilingual Neural Machine Translation", ACL 2020 ☆23 · Updated 3 years ago
- ☆41 · Updated 3 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Source code of NAACL 2021 "PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols" and ACL 2021 main conference… ☆48 · Updated 11 months ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- DisCo Transformer for Non-autoregressive MT ☆78 · Updated 2 years ago
- INSET: Sentence Infilling with Inter-sentential Transformer ☆30 · Updated 4 years ago
- ☆47 · Updated 4 years ago
- Code to run the TILT transfer learning experiments ☆31 · Updated 4 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- Source code for the paper "Multilingual Neural Machine Translation with Soft Decoupled Encoding" ☆29 · Updated 3 years ago
- Paper: Lexicon Learning for Few-Shot Neural Sequence Modeling ☆15 · Updated 3 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated last year
- Implementation of ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" ☆46 · Updated 2 years ago