MoonshotAI / checkpoint-engine
Checkpoint-engine is a simple middleware to update model weights in LLM inference engines
☆851 · Updated last week
Alternatives and similar repositories for checkpoint-engine
Users interested in checkpoint-engine are comparing it to the libraries listed below.
- An early research-stage MoE load balancer based on linear programming. ☆415 · Updated 2 weeks ago
- ☆917 · Updated last month
- ☆317 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,048 · Updated last week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- PyTorch-native post-training at scale ☆549 · Updated last week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆254 · Updated this week
- LLM KV cache compression made easy ☆701 · Updated this week
- Async RL Training at Scale ☆867 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆456 · Updated 6 months ago
- ☆1,215 · Updated 2 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆498 · Updated last week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆254 · Updated last week
- Perplexity GPU Kernels ☆534 · Updated 3 weeks ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,287 · Updated last week
- ☆1,242 · Updated 2 weeks ago
- Efficient LLM Inference over Long Sequences ☆392 · Updated 5 months ago
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆187 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆234 · Updated this week
- Materials for learning SGLang ☆658 · Updated last week
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆261 · Updated last week
- HuggingFace conversion and training library for Megatron-based models ☆228 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆691 · Updated this week
- A framework for efficient model inference with omni-modality models ☆466 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆683 · Updated this week
- ☆439 · Updated 3 months ago
- torchcomms: a modern PyTorch communications API ☆295 · Updated last week
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆313 · Updated last month
- A construction kit for reinforcement learning environment management. ☆226 · Updated this week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆507 · Updated 9 months ago