babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆13 · Updated last year
Alternatives and similar repositories for baseline-pretraining:
Users interested in baseline-pretraining are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- ☆14 · Updated 11 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆31 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆68 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated 11 months ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆71 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- List of papers on Self-Correction of LLMs. ☆71 · Updated 3 months ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Japanese Massive Multitask Language Understanding Benchmark ☆33 · Updated 3 months ago
- ☆10 · Updated 10 months ago
- ☆42 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆12 · Updated last year
- CycleQD is a framework for parameter space model merging. ☆35 · Updated last month
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes ☆12 · Updated 2 years ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- ☆73 · Updated 11 months ago
- sigma-MoE layer ☆18 · Updated last year
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer ☆55 · Updated 2 years ago
- Code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- ☆49 · Updated 11 months ago
- ☆46 · Updated 3 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆38 · Updated 3 weeks ago
- ☆15 · Updated 3 months ago