microsoft / tokenweave
Efficient Compute-Communication Overlap for Distributed LLM Inference
☆62 · Updated 3 weeks ago
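tokenweave's focus, compute-communication overlap, means hiding inter-GPU transfers (for example, the tensor-parallel all-reduce after a matmul) behind ongoing computation instead of stalling on them. A minimal PyTorch sketch of the idea, assuming an already-initialized `torch.distributed` process group; the function name, chunking scheme, and shapes are illustrative, not tokenweave's actual API:

```python
import torch
import torch.distributed as dist

def overlapped_matmul_allreduce(chunks, weight):
    """Overlap per-chunk matmuls with their all-reduces.

    NCCL runs collectives on its own CUDA stream, so while chunk i's
    all-reduce is still in flight, the loop already launches chunk
    i+1's matmul: transfer time hides behind compute. Assumes
    dist.init_process_group() has been called and all tensors live
    on the local GPU.
    """
    outputs, handles = [], []
    for x in chunks:
        y = x @ weight                                     # compute this chunk
        handles.append(dist.all_reduce(y, async_op=True))  # start its comm
        outputs.append(y)
    for h in handles:                                      # drain in-flight comms
        h.wait()
    return torch.cat(outputs)
```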
Alternatives and similar repositories for tokenweave
Users interested in tokenweave are comparing it to the libraries listed below.
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- ☆43 · Updated 6 months ago
- ☆79 · Updated last month
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆143 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- ☆46 · Updated 11 months ago
- Stateful LLM Serving ☆88 · Updated 8 months ago
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆66 · Updated 6 months ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆49 · Updated this week
- Tile-based language built for AI computation across all scales ☆80 · Updated last week
- ☆65 · Updated 6 months ago
- Nex Venus Communication Library ☆50 · Updated this week
- A simple calculation for LLM MFU (Model FLOPs Utilization); see the sketch after this list. ☆50 · Updated 2 months ago
- ☆57 · Updated last week
- ☆64 · Updated 5 months ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆76 · Updated last week
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆89 · Updated 5 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆85 · Updated 2 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆49 · Updated last month
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆29 · Updated 11 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆115 · Updated 6 months ago
- ☆81 · Updated 7 months ago
- GPTQ inference TVM kernel ☆39 · Updated last year
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆98 · Updated 2 months ago
- Microsoft Collective Communication Library ☆66 · Updated 11 months ago
- ☆90 · Updated 7 months ago
- ☆19 · Updated last year
- NEO is an LLM inference engine built to relieve the GPU memory crisis via CPU offloading ☆69 · Updated 5 months ago
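For the MFU calculator above, the standard definition (per the PaLM paper) is achieved model FLOPs per second divided by the hardware's peak FLOPs per second. A rough sketch using the common approximation of about 2 × parameter-count FLOPs per generated token for an inference forward pass (attention FLOPs ignored); the example numbers are hypothetical:

```python
def estimate_mfu(tokens_per_sec: float, n_params: float, peak_flops: float) -> float:
    """Model FLOPs Utilization = achieved FLOPs/s over peak FLOPs/s.

    Approximates forward-pass cost as ~2 FLOPs per parameter per token,
    which ignores attention-score FLOPs (those grow with context length).
    """
    achieved_flops_per_sec = tokens_per_sec * 2 * n_params
    return achieved_flops_per_sec / peak_flops

# Hypothetical example: a 7B-parameter model decoding 2,500 tokens/s
# on a GPU with ~989 TFLOPS of peak dense BF16 compute.
print(f"MFU: {estimate_mfu(2_500, 7e9, 989e12):.1%}")  # -> MFU: 3.5%
```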