babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆15 · Updated 2 years ago
Alternatives and similar repositories for baseline-pretraining
Users interested in baseline-pretraining are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆71 · Updated last year
- ☆53 · Updated 6 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- ☆74 · Updated last year
- ☆46 · Updated 3 years ago
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated last year
- ☆17 · Updated 6 months ago
- ☆14 · Updated last year
- Japanese Massive Multitask Language Understanding Benchmark ☆36 · Updated 6 months ago
- List of papers on Self-Correction of LLMs ☆73 · Updated 5 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- ☆49 · Updated last year
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Evaluation pipeline for the BabyLM Challenge 2023 ☆76 · Updated last year
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆43 · Updated 6 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 3 months ago
- Code for Zero-Shot Tokenizer Transfer ☆131 · Updated 5 months ago
- ☆33 · Updated 10 months ago
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆44 · Updated 4 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆72 · Updated last year
- Code repository for the paper "Mission: Impossible Language Models" ☆52 · Updated last month
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆54 · Updated 2 years ago
- Experiments toward training a new and improved T5 ☆77 · Updated last year
- ☆11 · Updated 3 years ago
- Japanese LLaMa experiment ☆53 · Updated 6 months ago
- CycleQD is a framework for parameter space model merging ☆40 · Updated 4 months ago
- ☆60 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆66 · Updated 2 years ago