MoonshotAI / checkpoint-engine
Checkpoint-engine is a simple middleware for updating model weights in LLM inference engines.
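To make the idea concrete, here is a minimal, purely illustrative sketch of the general pattern such middleware enables: copying fresh weights into a running inference engine in place, so serving resumes immediately without a restart. The names (`ToyEngine`, `apply_checkpoint`) are hypothetical and not the real checkpoint-engine API.

```python
# Hypothetical sketch of in-place weight updates for a live inference engine.
# ToyEngine and apply_checkpoint are illustrative names, not checkpoint-engine's API.

class ToyEngine:
    """Stand-in for an inference engine holding named weight tensors."""
    def __init__(self, weights):
        self.weights = dict(weights)  # tensor name -> list of floats

    def forward(self, x):
        # Trivial "model": scale the input by the first element of each tensor.
        out = x
        for w in self.weights.values():
            out *= w[0]
        return out

def apply_checkpoint(engine, new_weights):
    """Copy new weights into the live engine tensor by tensor, in place,
    validating names and shapes so a bad checkpoint fails fast."""
    for name, values in new_weights.items():
        if name not in engine.weights:
            raise KeyError(f"unexpected tensor {name!r}")
        if len(values) != len(engine.weights[name]):
            raise ValueError(f"shape mismatch for {name!r}")
        engine.weights[name][:] = values  # in-place slice assignment

engine = ToyEngine({"layer0": [2.0, 0.5], "layer1": [3.0, 1.0]})
print(engine.forward(1.0))  # 6.0 with the initial weights
apply_checkpoint(engine, {"layer0": [4.0, 0.5], "layer1": [3.0, 1.0]})
print(engine.forward(1.0))  # 12.0 after the in-place update, no restart
```

Real systems add complications this sketch omits (sharded tensors, GPU transfer, coordination across replicas), but the in-place-swap pattern is the core.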
☆902 · Updated this week
Alternatives and similar repositories for checkpoint-engine
Users interested in checkpoint-engine are comparing it to the libraries listed below:
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆789 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆384 · Updated this week
- ☆957 · Updated 3 months ago
- Block Diffusion for Ultra-Fast Speculative Decoding ☆432 · Updated last week
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆491 · Updated 2 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆268 · Updated last month
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆618 · Updated last week
- PyTorch-native post-training at scale ☆605 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,267 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,518 · Updated this week
- LLM KV cache compression made easy ☆866 · Updated this week
- Async RL Training at Scale ☆1,034 · Updated this week
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆266 · Updated this week
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆469 · Updated 8 months ago
- ☆1,278 · Updated 2 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆272 · Updated last week
- Accelerating MoE with IO and Tile-aware Optimizations ☆563 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆659 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆260 · Updated this week
- Perplexity GPU Kernels ☆554 · Updated 2 months ago
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆400 · Updated this week
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆557 · Updated last week
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- Materials for learning SGLang ☆728 · Updated 3 weeks ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆765 · Updated 3 weeks ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆220 · Updated this week
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆403 · Updated 3 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆523 · Updated 11 months ago
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) ☆781 · Updated 2 weeks ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training ☆218 · Updated 8 months ago