babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆12 · Updated last year
Alternatives and similar repositories for baseline-pretraining:
Users interested in baseline-pretraining are comparing it to the repositories listed below.
- Code for Zero-Shot Tokenizer Transfer ☆120 · Updated last month
- Mamba training library developed by Kotoba Technologies ☆67 · Updated last year
- Repository for the code of the NAACL 2022 paper "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" ☆64 · Updated 2 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆56 · Updated 8 months ago
- ☆44 · Updated 3 months ago
- Repo for the ICML 2023 paper "Why Do Nearest Neighbor Language Models Work?" ☆56 · Updated 2 years ago
- ☆73 · Updated last year
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆70 · Updated 11 months ago
- LTG-Bert ☆29 · Updated last year
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆37 · Updated this week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- Sampling-Based Minimum Bayes-Risk Decoding for Neural Machine Translation ☆16 · Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, and Luke Zettlemoyer ☆55 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆81 · Updated 3 weeks ago
- ☆32 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆42 · Updated last year
- Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes ☆12 · Updated last year
- Word acquisition in neural language models (TACL 2022). ☆15 · Updated 3 weeks ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆30 · Updated 8 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- ☆18 · Updated 8 months ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- ☆72 · Updated 9 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆25 · Updated 10 months ago
- sigma-MoE layer ☆18 · Updated last year
- [ACL 2023] Training Trajectories of Language Models Across Scales (https://arxiv.org/pdf/2212.09803.pdf) ☆22 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- ☆51 · Updated 9 months ago