character-ai / pipelining-sft
Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training
☆101 · Updated 4 months ago
Alternatives and similar repositories for pipelining-sft
Users interested in pipelining-sft are comparing it to the libraries listed below
- Storing long contexts in tiny caches with self-study ☆218 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆257 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Memory-optimized Mixture of Experts ☆69 · Updated 4 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆265 · Updated this week
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆194 · Updated this week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆322 · Updated this week
- Simple high-throughput inference library ☆150 · Updated 6 months ago
- ☆344 · Updated this week
- Simple & Scalable Pretraining for Neural Architecture Research ☆304 · Updated last month
- ☆48 · Updated last year
- ☆53 · Updated last year
- Streamline on-policy/off-policy distillation workflows in a few lines of code ☆67 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- DPO, but faster 🚀 ☆46 · Updated last year
- Load compute kernels from the Hub ☆348 · Updated this week
- ☆47 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆100 · Updated 6 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- ☆31 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆320 · Updated last month
- 👷 Build compute kernels ☆192 · Updated this week
- PyTorch implementation of models from the Zamba2 series ☆186 · Updated 10 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆111 · Updated 7 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated 2 weeks ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆225 · Updated last week
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆151 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models ☆99 · Updated 4 months ago