YJHMITWEB / ExFlow
Explore Inter-layer Expert Affinity in MoE Model Inference
☆9 · Updated last year
Alternatives and similar repositories for ExFlow
Users interested in ExFlow are comparing it to the libraries listed below.
- ☆54 · Updated last year
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆36 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆94 · Updated 2 months ago
- ☆62 · Updated 11 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆47 · Updated 2 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆40 · Updated 5 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆86 · Updated 2 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆39 · Updated last month
- LLM inference analyzer for different hardware platforms ☆69 · Updated last week
- ATC23 AE ☆45 · Updated 2 years ago
- ☆99 · Updated 6 months ago
- ☆76 · Updated last month
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆19 · Updated last year
- ☆21 · Updated last year
- ☆105 · Updated 7 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆25 · Updated last year
- A lightweight design for computation-communication overlap. ☆132 · Updated 3 weeks ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting" ☆57 · Updated 11 months ago
- ☆146 · Updated 10 months ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆20 · Updated 7 months ago
- Stateful LLM Serving ☆70 · Updated 2 months ago
- ☆59 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆211 · Updated last year
- Code release for AdapMoE, accepted at ICCAD 2024 ☆25 · Updated last month
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Honorable Mention] ☆10 · Updated 2 months ago
- ☆73 · Updated 4 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆136 · Updated 10 months ago
- ☆50 · Updated 6 months ago
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ☆76 · Updated 4 months ago
- Code for the paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆107 · Updated 2 weeks ago