microsoft / tokenweave
Efficient Compute-Communication Overlap for Distributed LLM Inference
☆43 · Updated last week
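TokenWeave's headline technique is overlapping collective communication with compute during distributed inference. As a rough illustration of the generic pattern only (not TokenWeave's actual fused kernels), the sketch below splits a batch into chunks and issues each chunk's all-reduce asynchronously so it can overlap with the next chunk's compute; the `ffn` stand-in, chunk count, and shapes are assumptions for illustration.

```python
# Generic chunked compute/communication overlap (illustrative sketch only,
# not TokenWeave's implementation). Launch with, e.g.:
#   torchrun --nproc_per_node=2 overlap_sketch.py
import torch
import torch.distributed as dist


def ffn(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for per-chunk compute (e.g., an MLP block whose output
    # must be all-reduced under tensor parallelism).
    return torch.relu(x @ x.new_ones(x.shape[-1], x.shape[-1]))


def overlapped_forward(x: torch.Tensor, n_chunks: int = 2) -> torch.Tensor:
    outputs, handles = [], []
    for chunk in x.chunk(n_chunks, dim=0):
        out = ffn(chunk)  # compute chunk i on the default stream
        # async_op=True enqueues the all-reduce on NCCL's own stream and
        # returns immediately, so chunk i's communication can overlap
        # with chunk i+1's compute.
        handles.append(dist.all_reduce(out, async_op=True))
        outputs.append(out)
    for h in handles:  # drain all pending collectives before using results
        h.wait()
    return torch.cat(outputs, dim=0)


if __name__ == "__main__":
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    x = torch.randn(32, 1024, device="cuda")
    y = overlapped_forward(x)
    print(f"rank {dist.get_rank()}: output shape {tuple(y.shape)}")
    dist.destroy_process_group()
```

Production designs typically go further, e.g. fusing the collective with adjacent kernels and choosing split sizes that balance the compute and communication phases; the sketch only shows the scheduling idea.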
Alternatives and similar repositories for tokenweave
Users interested in tokenweave are comparing it to the libraries listed below.
- ☆42 · Updated 4 months ago
- ☆71 · Updated last year
- Stateful LLM Serving ☆84 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- ☆51 · Updated this week
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 10 months ago
- A framework for generating realistic LLM serving workloads ☆58 · Updated 3 months ago
- kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. ☆91 · Updated this week
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆181 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated 9 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆62 · Updated this week
- ☆46 · Updated 9 months ago
- DeeperGEMM: crazy optimized version ☆70 · Updated 4 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆31 · Updated 7 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆27 · Updated 9 months ago
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆80 · Updated last month
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆47 · Updated last month
- A resilient distributed training framework ☆95 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆40 · Updated 2 years ago
- ☆19 · Updated 11 months ago
- LLM Serving Performance Evaluation Harness ☆79 · Updated 6 months ago
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- ☆130 · Updated 11 months ago
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] ☆22 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆167 · Updated last week
- Tile-based language built for AI computation across all scales ☆57 · Updated last week
- ☆135 · Updated 2 months ago
- A simple calculation for LLM MFU (see the sketch after this list). ☆44 · Updated last week
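On that last item: MFU (Model FLOPs Utilization) is conventionally the ratio of achieved FLOPs per second to the hardware's peak, with achieved FLOPs per token often approximated as 2N for decoding (6N for training) for an N-parameter dense model. A minimal sketch with assumed numbers; the helper name and example constants are mine, not that repository's API.

```python
# Back-of-the-envelope MFU (Model FLOPs Utilization) estimate.
# Assumes the common ~2 * n_params FLOPs/token decode approximation,
# which ignores attention FLOPs (significant at long context).
def mfu(n_params: float, tokens_per_sec: float, peak_flops: float,
        flops_per_token_factor: float = 2.0) -> float:
    achieved_flops = flops_per_token_factor * n_params * tokens_per_sec
    return achieved_flops / peak_flops

# Example: a 70B-parameter model decoding 1200 tok/s across
# 8 GPUs with ~989 TFLOPS dense BF16 peak each (H100-class).
print(f"MFU: {mfu(70e9, 1200.0, 8 * 989e12):.2%}")  # -> MFU: 2.12%
```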