microsoft / tokenweave
Accepted to MLSys 2026
☆70 · Updated last week
Alternatives and similar repositories for tokenweave
Users interested in tokenweave are comparing it to the libraries listed below.
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated last week
- DeeperGEMM: crazy optimized version ☆73 · Updated 9 months ago
- NVSHMEM‑Tutorial: Build a DeepEP‑like GPU Buffer ☆161 · Updated 4 months ago
- ☆84 · Updated 3 months ago
- A lightweight design for computation-communication overlap. ☆219 · Updated 2 weeks ago
- ☆65 · Updated 9 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆92 · Updated 3 weeks ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆73 · Updated 8 months ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆111 · Updated last month
- Nex Venus Communication Library ☆72 · Updated 2 months ago
- ☆51 · Updated 9 months ago
- Debug print operator for CUDA graph debugging ☆14 · Updated last year
- ☆41 · Updated 3 months ago
- ☆84 · Updated this week
- ☆88 · Updated 8 months ago
- ☆47 · Updated last year
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆53 · Updated 3 weeks ago
- ☆93 · Updated 10 months ago
- Tile-based language built for AI computation across all scales ☆119 · Updated last week
- Microsoft Collective Communication Library ☆66 · Updated last year
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated 6 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆87 · Updated 2 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆71 · Updated 4 months ago
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated last month
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆209 · Updated last year