deepseek-ai / LPLB
An early-research-stage expert-parallel load balancer for MoE models based on linear programming.
☆491 · Updated 2 months ago
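The core idea is small enough to sketch: treat redundant expert replicas as extra capacity, then solve a linear program that splits each expert's token load across its replicas so the busiest GPU carries as little as possible. The sketch below illustrates that formulation only; it is not LPLB's actual API, and the names `balance` and `placement` plus the use of `scipy.optimize.linprog` are assumptions made for the example.

```python
# A minimal sketch of LP-based expert load balancing (illustration only,
# NOT LPLB's API). Each expert may be replicated on several GPUs; we split
# its token load across replicas to minimize the bottleneck GPU load.
import numpy as np
from scipy.optimize import linprog

def balance(loads, placement, n_gpus):
    """loads[e]: tokens routed to expert e this step;
    placement[e]: list of GPU ids holding a replica of expert e."""
    pairs = [(e, g) for e, gpus in enumerate(placement) for g in gpus]
    n = len(pairs)                       # one variable per (expert, replica)
    c = np.zeros(n + 1); c[-1] = 1.0     # minimize t, the bottleneck GPU load

    # Capacity rows: (sum of replica shares on GPU g) - t <= 0
    A_ub = np.zeros((n_gpus, n + 1)); A_ub[:, -1] = -1.0
    for i, (_, g) in enumerate(pairs):
        A_ub[g, i] = 1.0

    # Conservation rows: each expert's replicas absorb its full load
    A_eq = np.zeros((len(loads), n + 1))
    for i, (e, _) in enumerate(pairs):
        A_eq[e, i] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_gpus),
                  A_eq=A_eq, b_eq=np.asarray(loads, dtype=float))
    shares = {pair: res.x[i] for i, pair in enumerate(pairs)}
    return shares, res.x[-1]             # per-replica token shares, bottleneck

# Expert 0 (900 tokens) replicated on GPUs 0 and 1; expert 1 (100) on GPU 1.
shares, bottleneck = balance([900, 100], [[0, 1], [1]], n_gpus=2)
```

In the toy call, replicating expert 0 on both GPUs lets the solver shift 400 of its 900 tokens to GPU 1, evening the per-GPU load at 500.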
Alternatives and similar repositories for LPLB
Users interested in LPLB are comparing it to the libraries listed below.
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆557 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆563 · Updated last week
- torchcomms: a modern PyTorch communications API ☆323 · Updated this week
- Perplexity GPU Kernels ☆554 · Updated 2 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆379 · Updated this week
- Perplexity open source garden for inference technology ☆350 · Updated last month
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆459 · Updated 3 weeks ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆264 · Updated last month
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆891 · Updated last week
- Block Diffusion for Ultra-Fast Speculative Decoding ☆432 · Updated this week
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference (see the dispatch sketch after this list) ☆109 · Updated 3 weeks ago
- Autonomous GPU Kernel Generation via Deep Agents ☆223 · Updated this week
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆80 · Updated last month
- Code for data-aware compression of DeepSeek models ☆69 · Updated last month
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆252 · Updated last week
- JAX backend for SGL ☆232 · Updated this week
- Helpful kernel tutorials and examples for tile-based GPU programming ☆592 · Updated last week
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆789 · Updated this week
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆276 · Updated 3 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆207 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆207 · Updated 2 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆186 · Updated last week
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆403 · Updated 3 weeks ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆309 · Updated 7 months ago
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆247 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆457 · Updated 8 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆188 · Updated this week
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆137 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
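As referenced in the all-to-all entry above, the communication pattern such libraries accelerate is the MoE token dispatch: each rank sends the tokens routed to remote experts and receives the tokens routed to its local experts. The sketch below shows that pattern with plain `torch.distributed` as an illustration only; it is not any listed library's API, `dispatch_tokens` and its arguments are hypothetical names, and it assumes an already-initialized process group (e.g. the NCCL backend).

```python
# A generic MoE token-dispatch sketch using torch.distributed primitives
# (illustration of the all-to-all pattern, NOT a specific library's API).
import torch
import torch.distributed as dist

def dispatch_tokens(tokens, dest_rank, world_size):
    """tokens: [n, d] hidden states; dest_rank: [n] rank owning each token's expert."""
    order = torch.argsort(dest_rank)             # group tokens by destination rank
    send_buf = tokens[order].contiguous()
    in_splits = torch.bincount(dest_rank, minlength=world_size)

    # Exchange split sizes so every rank knows how many tokens it will receive.
    out_splits = torch.empty_like(in_splits)
    dist.all_to_all_single(out_splits, in_splits)

    recv_buf = tokens.new_empty((int(out_splits.sum()), tokens.shape[1]))
    dist.all_to_all_single(recv_buf, send_buf,
                           output_split_sizes=out_splits.tolist(),
                           input_split_sizes=in_splits.tolist())
    return recv_buf, order                       # `order` un-permutes at combine time
```

The combine step after expert computation runs the same exchange in reverse with the split sizes swapped, then uses the inverse of `order` to restore the original token layout.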