babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆14 · Updated last year
Alternatives and similar repositories for baseline-pretraining:
Users interested in baseline-pretraining are comparing it to the libraries listed below.
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆57 · Updated 10 months ago
- ☆46 · Updated 3 years ago
- Mamba training library developed by Kotoba Technologies ☆69 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆71 · Updated last year
- LTG-Bert ☆32 · Updated last year
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 3 months ago
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- ☆48 · Updated 4 months ago
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆65 · Updated 2 years ago
- ☆44 · Updated 5 months ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated last year
- ☆19 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆39 · Updated 2 months ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆127 · Updated 3 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- ☆34 · Updated 10 months ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆30 · Updated 10 months ago
- ☆72 · Updated 11 months ago
- Do Multilingual Language Models Think Better in English? ☆41 · Updated last year
- Code for SaGe subword tokenizer (EACL 2023) ☆24 · Updated 4 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆77 · Updated last year
- ☆33 · Updated 2 weeks ago
- The evaluation pipeline for the 2024 BabyLM Challenge. ☆30 · Updated 5 months ago
- Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes ☆12 · Updated 2 years ago