babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆14 · Updated last year
Alternatives and similar repositories for baseline-pretraining
Users interested in baseline-pretraining are comparing it to the libraries listed below.
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated 11 months ago
- Continual pre-training & instruction tuning support, forked from llama-recipes ☆32 · Updated last year
- ☆50 · Updated 5 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆71 · Updated last year
- ☆34 · Updated 10 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆127 · Updated 4 months ago
- ☆14 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- List of papers on Self-Correction of LLMs ☆72 · Updated 4 months ago
- ☆16 · Updated 5 months ago
- Example of using Epochraft to train Hugging Face Transformers models with PyTorch FSDP ☆11 · Updated last year
- ☆46 · Updated 3 years ago
- Evaluation pipeline for the BabyLM Challenge 2023 ☆76 · Updated last year
- Do Multilingual Language Models Think Better in English? ☆41 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆70 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- ☆72 · Updated last year
- ☆41 · Updated last year
- ☆20 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆26 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated last year
- LTG-Bert ☆32 · Updated last year
- Japanese Massive Multitask Language Understanding Benchmark ☆35 · Updated 5 months ago
- ☆52 · Updated 11 months ago
- 🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment ☆10 · Updated last month
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- A toolkit for scaling law research ⚖ ☆49 · Updated 3 months ago
- ☆44 · Updated 4 years ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year