MoonshotAI / checkpoint-engine
Checkpoint-engine is a simple middleware to update model weights in LLM inference engines
☆751, updated this week
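To give a rough sense of what "updating model weights in a running inference engine" involves, here is a minimal conceptual sketch. It assumes an already-initialized torch.distributed process group spanning a trainer rank and the inference ranks; it is an illustration of in-place weight refresh, not checkpoint-engine's actual API.

```python
# Conceptual sketch only: broadcast refreshed weights from a trainer rank to
# inference ranks and overwrite the live parameters in place, so the serving
# engine keeps running on the same tensors with updated values.
# Assumes torch.distributed is already initialized (e.g. via torchrun); this is
# NOT checkpoint-engine's real interface.
import torch
import torch.distributed as dist

def push_updated_weights(model: torch.nn.Module, src_rank: int = 0) -> None:
    with torch.no_grad():
        for _, param in model.named_parameters():
            # broadcast() works in place: src_rank sends its tensor, every
            # other rank receives directly into its own param.data buffer.
            dist.broadcast(param.data, src=src_rank)
```

In practice the middleware also has to coordinate with the serving engine (pause or atomically swap in-flight requests), map weights across sharded parallel layouts, and move data efficiently; that coordination is the part a dedicated tool like checkpoint-engine is built to handle.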
Alternatives and similar repositories for checkpoint-engine
Users who are interested in checkpoint-engine are comparing it to the libraries listed below.
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆267, updated this week)
- Scalable toolkit for efficient model reinforcement (☆910, updated this week)
- ☆773, updated 3 weeks ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs (☆248, updated 2 months ago)
- SkyRL: A Modular Full-stack RL Library for LLMs (☆906, updated this week)
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling (☆443, updated 4 months ago)
- Post-training with Tinker (☆550, updated this week)
- Async RL Training at Scale (☆650, updated last week)
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. (☆197, updated 4 months ago)
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. (☆280, updated last month)
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) (☆219, updated this week)
- Efficient LLM Inference over Long Sequences (☆391, updated 3 months ago)
- ☆683, updated this week
- Simple & Scalable Pretraining for Neural Architecture Research (☆296, updated last month)
- Common recipes to run vLLM (☆146, updated this week)
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache (☆125, updated last month)
- LLM KV cache compression made easy (☆623, updated this week)
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… (☆214, updated last week)
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems (☆581, updated 2 weeks ago)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆491, updated 7 months ago)
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. (☆412, updated this week)
- Perplexity GPU Kernels (☆476, updated 2 weeks ago)
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" (☆602, updated 6 months ago)
- slime is an LLM post-training framework for RL Scaling. (☆2,023, updated this week)
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents (☆407, updated last week)
- A project to improve skills of large language models (☆568, updated this week)
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation (☆238, updated 9 months ago)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆343, updated 9 months ago)
- Tina: Tiny Reasoning Models via LoRA (☆284, updated last week)
- ☆202, updated 2 weeks ago