konstantinjdobler / focus
[EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models"
☆30 · Updated 5 months ago
Alternatives and similar repositories for focus:
Users interested in focus are comparing it to the libraries listed below.
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆80 · Updated 6 months ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆71 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- LTG-Bert ☆30 · Updated last year
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages -- ACL 2023 ☆100 · Updated 11 months ago
- Software for transferring pre-trained English models to foreign languages ☆18 · Updated 2 years ago
- NTREX -- News Test References for MT Evaluation ☆81 · Updated 9 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Tools for evaluating the performance of MT metrics on data from recent WMT metrics shared tasks. ☆101 · Updated 2 weeks ago
- German Alpaca Dataset (Cleaned + Translated) ☆24 · Updated last year
- Curriculum training ☆17 · Updated 2 weeks ago
- Code and data for the IWSLT 2022 shared task on Formality Control for SLT ☆21 · Updated last year
- ☆16 · Updated 2 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 2 years ago
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. ☆74 · Updated 3 years ago
- ☆15 · Updated last year
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- ☆85 · Updated 3 months ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆56 · Updated 7 months ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 3 years ago
- ☆97 · Updated 2 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆38 · Updated last month
- German small and large versions of GPT2. ☆20 · Updated 2 years ago
- Bicleaner fork that uses neural networks ☆39 · Updated 8 months ago
- Find informative examples to efficiently (human)-evaluate NLG models. ☆10 · Updated last week
- This repository contains the code for the paper 'PARM: Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval' pu… ☆40 · Updated 3 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated 11 months ago