babylm / baseline-pretraining
Code for pre-training BabyLM baseline models.
☆16 · Updated 2 years ago
Alternatives and similar repositories for baseline-pretraining
Users interested in baseline-pretraining are comparing it to the repositories listed below.
- Mamba training library developed by Kotoba Technologies ☆70 · Updated last year
- Supports continual pre-training and instruction tuning; forked from llama-recipes ☆33 · Updated last year
- CycleQD is a framework for parameter-space model merging. ☆44 · Updated 8 months ago
- Checkpointable dataset utilities for foundation model training ☆31 · Updated last year
- Ongoing research training Mixture of Experts models. ☆21 · Updated last year
- ☆14 · Updated last year
- Experiments toward training a new and improved T5 ☆75 · Updated last year
- ☆76 · Updated last year
- Swallow project: evaluation framework for post-trained large language models ☆21 · Updated last month
- ☆62 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- ☆57 · Updated 10 months ago
- ☆50 · Updated last year
- Japanese LLaMa experiment ☆54 · Updated 10 months ago
- Mixtral-based Ja-En (En-Ja) translation model ☆19 · Updated 9 months ago
- ☆18 · Updated 10 months ago
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆116 · Updated last week
- Swallow project: evaluation scripts for large language models ☆22 · Updated last month
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ☆48 · Updated 8 months ago
- ☆33 · Updated last year