character-ai / pipelining-sft
Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training
☆114 · Updated 5 months ago
Alternatives and similar repositories for pipelining-sft
Users interested in pipelining-sft are comparing it to the libraries listed below
- Memory-optimized Mixture of Experts ☆72 · Updated 5 months ago
- Storing long contexts in tiny caches with self-study ☆231 · Updated last month
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆346 · Updated 3 weeks ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆277 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆269 · Updated this week
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆106 · Updated 8 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆256 · Updated last week
- DPO, but faster 🚀 ☆46 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- LM engine is a library for pretraining/finetuning LLMs ☆110 · Updated last week
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- Simple high-throughput inference library ☆155 · Updated 8 months ago
- ☆48 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆88 · Updated this week
- ☆39 · Updated last year
- ☆136 · Updated 9 months ago
- ☆54 · Updated last year
- Streamline on-policy/off-policy distillation workflows in a few lines of code ☆91 · Updated this week
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 8 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 9 months ago
- MoE training for Me and You and maybe other people ☆319 · Updated 2 weeks ago
- Pytorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆245 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆190 · Updated 10 months ago
- Replicating O1 inference-time scaling laws ☆90 · Updated last year
- ☆31 · Updated last year
- accompanying material for sleep-time compute paper ☆118 · Updated 8 months ago