babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆16 · Updated 2 years ago
Alternatives and similar repositories for baseline-pretraining
Users interested in baseline-pretraining are comparing it to the repositories listed below.
- Mamba training library developed by Kotoba Technologies ☆70 · Updated last year
- ☆14 · Updated last year
- Supports continual pre-training and instruction tuning; forked from llama-recipes ☆33 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆31 · Updated last year
- CycleQD is a framework for parameter-space model merging. ☆44 · Updated 9 months ago
- Ongoing research on training Mixture-of-Experts models. ☆21 · Updated last year
- ☆61 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- Japanese LLaMa experiment ☆53 · Updated 3 weeks ago
- Code for "Discovering Preference Optimization Algorithms with and for Large Language Models" ☆64 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- Experiments toward training a new and improved T5 ☆75 · Updated last year
- Train, tune, and run inference with the Bamba model ☆135 · Updated 5 months ago
- This repository contains code for removing benchmark data from your training data to help combat data snooping. ☆27 · Updated 2 years ago
- Swallow project: evaluation framework for post-trained large language models ☆23 · Updated 3 weeks ago
- ☆76 · Updated last year
- ☆42 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆41 · Updated last year
- Token Omission Via Attention ☆127 · Updated last year
- ☆18 · Updated 11 months ago
- ☆50 · Updated last year
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ☆51 · Updated 9 months ago
- Swallow project: evaluation scripts for large language models ☆22 · Updated last month
- ☆57 · Updated 11 months ago
- Some common HuggingFace transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆118 · Updated last month
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated 2 years ago
- Japanese Massive Multitask Language Understanding Benchmark ☆36 · Updated last month
- List of papers on Self-Correction of LLMs. ☆80 · Updated 10 months ago