deepseek-ai / LPLB
An early research-stage MoE load balancer based on linear programming.
☆228 · Updated this week
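LPLB's actual formulation isn't described on this page, but the core idea of casting MoE load balancing as a linear program can be sketched. The toy below splits each expert's token load across its GPU replicas so the peak per-GPU load is minimized; the loads, replica placement, and all names are hypothetical, `scipy` is assumed available, and this is an illustration of the general technique, not LPLB's implementation:

```python
# Hypothetical sketch: MoE load balancing as a linear program.
# NOT LPLB's actual formulation -- just the general idea: choose
# fractions x[e, g] of expert e's tokens sent to GPU g (where a
# replica exists) to minimize t, the peak per-GPU load.
import numpy as np
from scipy.optimize import linprog

loads = np.array([120.0, 45.0, 300.0, 80.0])  # tokens per expert (made up)
replicas = [[0, 1], [1], [0, 2], [2]]         # GPUs holding each expert (made up)
num_gpus = 3

# Variables: one x[e, g] per (expert, replica) pair, plus t = peak load.
pairs = [(e, g) for e, gs in enumerate(replicas) for g in gs]
n = len(pairs)

c = np.zeros(n + 1)
c[-1] = 1.0                                   # objective: minimize t

# Equality constraints: each expert's fractions sum to 1.
A_eq = np.zeros((len(loads), n + 1))
for i, (e, g) in enumerate(pairs):
    A_eq[e, i] = 1.0
b_eq = np.ones(len(loads))

# Inequality constraints: for every GPU g, sum_e loads[e]*x[e, g] - t <= 0.
A_ub = np.zeros((num_gpus, n + 1))
for i, (e, g) in enumerate(pairs):
    A_ub[g, i] = loads[e]
A_ub[:, -1] = -1.0
b_ub = np.zeros(num_gpus)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print("peak GPU load:", res.x[-1])
for (e, g), frac in zip(pairs, res.x[:-1]):
    print(f"expert {e} -> GPU {g}: {frac:.2f}")
```

A real balancer would re-solve (or warm-start) as routing statistics shift between batches; this sketch only rebalances a single fixed snapshot of the loads.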
Alternatives and similar repositories for LPLB
Users interested in LPLB are comparing it to the libraries listed below.
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ☆178 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆385 · Updated last week
- torchcomms: a modern PyTorch communications API ☆291 · Updated this week
- JAX backend for SGL ☆175 · Updated this week
- Perplexity open source garden for inference technology ☆232 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆252 · Updated 4 months ago
- How to ensure correctness and ship LLM generated kernels in PyTorch ☆121 · Updated last week
- Perplexity GPU Kernels ☆529 · Updated 2 weeks ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆89 · Updated 5 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆171 · Updated last week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆95 · Updated 5 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆300 · Updated this week
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆154 · Updated last month
- ☆109 · Updated 6 months ago
- ☆316 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆230 · Updated last week
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆259 · Updated last month
- ☆79 · Updated last month
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆286 · Updated this week
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆52 · Updated 3 weeks ago
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago
- Allow torch tensor memory to be released and resumed later ☆167 · Updated last week
- ☆93 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆137 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆196 · Updated last year
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆63 · Updated 2 months ago
- Collection of kernels written in Triton language ☆167 · Updated 7 months ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆99 · Updated last week