konstantinjdobler / focus
[EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models"
☆32 Updated 2 months ago
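FOCUS initializes the embedding matrix for a new target-language tokenizer by copying embeddings for tokens that overlap with the source vocabulary and expressing each genuinely new token as a similarity-weighted combination of overlapping tokens. The sketch below is a minimal, dependency-light illustration of that idea, not the official implementation: the paper derives similarities from fastText vectors trained on target-language data and uses sparsemax weights, whereas this sketch uses a plain softmax, and all inputs (`source_emb`, `aux_emb`, the vocab dicts) are hypothetical stand-ins.

```python
# Minimal sketch of FOCUS-style embedding initialization (illustrative only).
# Real FOCUS uses fastText auxiliary embeddings and sparsemax weights; here a
# softmax over cosine similarities stands in, and all inputs are hypothetical.
import numpy as np

def focus_style_init(
    source_emb: np.ndarray,          # (|V_src|, d) pretrained embeddings
    src_vocab: dict[str, int],       # token -> row in source_emb
    tgt_vocab: dict[str, int],       # token -> row in the new matrix
    aux_emb: dict[str, np.ndarray],  # auxiliary vectors for target tokens
    temperature: float = 0.1,
) -> np.ndarray:
    d = source_emb.shape[1]
    target_emb = np.random.default_rng(0).normal(0.0, 0.02, (len(tgt_vocab), d))

    # 1) Tokens shared by both vocabularies keep their source embeddings.
    overlap = [t for t in tgt_vocab if t in src_vocab and t in aux_emb]
    for t in overlap:
        target_emb[tgt_vocab[t]] = source_emb[src_vocab[t]]
    if not overlap:
        return target_emb

    overlap_src = np.stack([source_emb[src_vocab[t]] for t in overlap])
    overlap_aux = np.stack([aux_emb[t] for t in overlap])
    overlap_aux /= np.linalg.norm(overlap_aux, axis=1, keepdims=True)

    # 2) Each new token becomes a weighted average of the overlapping tokens'
    #    *source* embeddings, weighted by similarity in the auxiliary space.
    for t, i in tgt_vocab.items():
        if t in src_vocab or t not in aux_emb:
            continue
        v = aux_emb[t] / np.linalg.norm(aux_emb[t])
        weights = np.exp(overlap_aux @ v / temperature)
        target_emb[i] = (weights / weights.sum()) @ overlap_src
    return target_emb
```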
Alternatives and similar repositories for focus
Users interested in focus are comparing it to the libraries listed below.
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆82 Updated 10 months ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆73 Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆135 Updated 6 months ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages -- ACL 2023 ☆103 Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers (see the sketch after this list) ☆58 Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 Updated 2 years ago
- ☆100 Updated 2 years ago
- ☆104 Updated 7 months ago
- Software for transferring pre-trained English models to foreign languages ☆18 Updated 2 years ago
- ☆20 Updated 2 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 Updated 3 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆45 Updated 5 months ago
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" ☆20 Updated last year
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 Updated 3 years ago
- Tools for evaluating the performance of MT metrics on data from recent WMT metrics shared tasks. ☆111 Updated 4 months ago
- Ensembling Hugging Face transformers made easy ☆63 Updated 2 years ago
- ☆66 Updated 2 years ago
- ☆51 Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 Updated 2 years ago
- Do Multilingual Language Models Think Better in English? ☆42 Updated 2 years ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆59 Updated last year
- A framework for wisely initializing unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 Updated last year
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. ☆78 Updated 3 years ago
- LTG-Bert ☆33 Updated last year
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 Updated last year
- Official implementation of "GPT or BERT: why not both?" ☆57 Updated last week
- ☆54 Updated 2 years ago
- Code and data for the paper "Turning English-centric LLMs Into Polyglots: How Much Multilinguality Is Needed?" ☆25 Updated 2 months ago
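For the Minimum Bayes Risk decoding entry above, here is a minimal sketch of the selection step under stated assumptions: `candidates` is assumed to be pre-sampled (e.g., via Hugging Face `model.generate` with sampling), and a simple token-overlap F1 stands in for the utility metric, whereas the linked repo works with standard MT metrics such as BLEU. All names below are illustrative, not the repo's API.

```python
# Minimal MBR sketch: pick the candidate with the highest expected utility,
# estimated against all candidates acting as pseudo-references. Token-overlap
# F1 is a stand-in utility; real setups use metrics like BLEU or COMET.
def token_f1(hyp: str, ref: str) -> float:
    h, r = hyp.split(), ref.split()
    common = len(set(h) & set(r))
    if common == 0:
        return 0.0
    p, rec = common / len(h), common / len(r)
    return 2 * p * rec / (p + rec)

def mbr_select(candidates: list[str]) -> str:
    # Average each hypothesis's utility over all candidates as references.
    def expected_utility(hyp: str) -> float:
        return sum(token_f1(hyp, ref) for ref in candidates) / len(candidates)
    return max(candidates, key=expected_utility)

# The "consensus" candidate wins, not necessarily the most probable one:
print(mbr_select(["the cat sat", "a cat sat down", "the cat sat down"]))
```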