google-research / xtreme-up
☆51 · Updated 2 years ago
Alternatives and similar repositories for xtreme-up
Users interested in xtreme-up are comparing it to the repositories listed below.
- BLOOM+1: Adapting the BLOOM model to support a new unseen language ☆72 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- ☆66 · Updated last year
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆101 · Updated last year
- A tiny BERT for low-resource monolingual models ☆31 · Updated 8 months ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- ☆100 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- ☆46 · Updated 3 years ago
- Evaluation pipeline for the BabyLM Challenge 2023 ☆76 · Updated last year
- LTG-Bert ☆33 · Updated last year
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- This repository hosts my experiments for the project I did with OffNote Labs. ☆10 · Updated 4 years ago
- Do Multilingual Language Models Think Better in English? ☆41 · Updated last year
- Experiments on including metadata such as URLs, timestamps, website descriptions, and HTML tags during pretraining ☆31 · Updated 2 years ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Apps built using Inspired Cognition's Critique ☆58 · Updated 2 years ago
- ☆44 · Updated 7 months ago
- ☆21 · Updated 2 years ago
- Mr. TyDi is a multilingual benchmark dataset built on TyDi, covering eleven typologically diverse languages ☆76 · Updated 3 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models ☆82 · Updated 9 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- ☆19 · Updated last year
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || 🔌 A central repository collecting pre-trained adapter modules ☆68 · Updated last year
- Statistics on multilingual datasets ☆17 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆73 · Updated 10 months ago
- ☆36 · Updated last year