MoonshotAI / checkpoint-engine
Checkpoint-engine is a simple middleware to update model weights in LLM inference engines
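The weight hot-update pattern such middleware implements can be sketched as follows. This is a hypothetical toy illustration of the idea (a trainer publishes a checkpoint, and running inference replicas swap in the new weights without restarting), not checkpoint-engine's actual API; all class and function names here are made up.

```python
class InferenceEngine:
    """Toy stand-in for a serving engine that holds model weights in memory."""

    def __init__(self, weights):
        self.weights = dict(weights)
        self.version = 0

    def update_weights(self, new_weights):
        # Rebind the weight mapping in one step: requests that started
        # before the swap keep referencing the old dict object.
        self.weights = dict(new_weights)
        self.version += 1


def push_checkpoint(engines, checkpoint):
    """Broadcast one checkpoint to every engine replica (the middleware's job)."""
    for engine in engines:
        engine.update_weights(checkpoint)


# Two replicas start on the same weights; one push updates both in place.
engines = [InferenceEngine({"layer0.w": 0.1}) for _ in range(2)]
push_checkpoint(engines, {"layer0.w": 0.2})
```

Real systems broadcast large tensors over fast interconnects rather than copying Python dicts, but the control flow — publish once, update every replica in place — is the same.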
☆820 · Updated this week
Alternatives and similar repositories for checkpoint-engine
Users interested in checkpoint-engine are comparing it to the libraries listed below.
- ☆894 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆299 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,009 · Updated last week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆251 · Updated 4 months ago
- PyTorch-native post-training at scale ☆509 · Updated this week
- ☆1,073 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆460 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,170 · Updated this week
- Perplexity GPU Kernels ☆528 · Updated last week
- LLM KV cache compression made easy ☆680 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆450 · Updated 5 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆229 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆241 · Updated this week
- Efficient LLM Inference over Long Sequences ☆390 · Updated 4 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆655 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆628 · Updated this week
- Materials for learning SGLang ☆636 · Updated 2 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated last week
- Async RL Training at Scale ☆749 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆499 · Updated 9 months ago
- ☆973 · Updated last month
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆203 · Updated 5 months ago
- Training library for Megatron-based models ☆174 · Updated this week
- Common recipes to run vLLM ☆214 · Updated last week
- Muon is Scalable for LLM Training ☆1,354 · Updated 3 months ago
- ☆431 · Updated 3 months ago
- kernels, of the mega variety ☆597 · Updated last month
- slime is an LLM post-training framework for RL Scaling. ☆2,407 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs ☆912 · Updated 2 weeks ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆357 · Updated this week