pku-liang / ArkVale
ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24)
☆26 · Updated 2 months ago
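For readers skimming the list, here is a minimal sketch of the "recallable" key-value eviction idea the title describes, assuming a paged KV cache: evicted pages are backed up to host memory rather than discarded, so they can be recalled to the GPU later. The class and method names are hypothetical, and plain LRU stands in for ArkVale's actual page-importance estimation; consult the paper and repository for the real design.

```python
# Hypothetical sketch of recallable KV-cache eviction: evicted pages are
# backed up off-GPU instead of being discarded, so they can be recalled
# if they become important again. All names are illustrative; plain LRU
# stands in for the paper's page-importance estimation.
from collections import OrderedDict


class RecallableKVCache:
    def __init__(self, gpu_budget_pages: int):
        self.budget = gpu_budget_pages
        self.gpu_pages = OrderedDict()  # page_id -> KV page kept on the GPU
        self.cpu_backup = {}            # page_id -> KV page evicted to host memory

    def put(self, page_id, kv_page):
        """Admit a freshly computed KV page, evicting if over budget."""
        self.gpu_pages[page_id] = kv_page
        self._evict_if_needed()

    def touch(self, page_id):
        """Mark a page as needed for the current step; recall it if evicted."""
        if page_id in self.cpu_backup:
            # Recall path: bring the backed-up page under the GPU budget again.
            self.gpu_pages[page_id] = self.cpu_backup.pop(page_id)
            self._evict_if_needed()
        elif page_id in self.gpu_pages:
            self.gpu_pages.move_to_end(page_id)  # refresh recency

    def _evict_if_needed(self):
        while len(self.gpu_pages) > self.budget:
            # Evict the coldest page, but back it up rather than drop it;
            # this is what makes the eviction "recallable".
            victim_id, kv_page = self.gpu_pages.popitem(last=False)
            self.cpu_backup[victim_id] = kv_page
```

The point of the sketch is that eviction only moves pages between tiers, so no context is ever permanently lost, unlike eviction schemes that drop tokens outright.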
Alternatives and similar repositories for ArkVale:
Users interested in ArkVale are comparing it to the repositories listed below.
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆50 · Updated 8 months ago
- LLM inference analyzer for different hardware platforms ☆52 · Updated 3 weeks ago
- Curated collection of papers on MoE model inference ☆64 · Updated this week
- Stateful LLM Serving ☆46 · Updated 6 months ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 2 months ago
- ThrillerFlow is a dataflow analysis and codegen framework written in Rust. ☆14 · Updated 2 months ago
- Compiler for Dynamic Neural Networks ☆45 · Updated last year
- LLM serving cluster simulator ☆92 · Updated 9 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆31 · Updated 2 weeks ago
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆15 · Updated 4 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆20 · Updated 9 months ago
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆7 · Updated 9 months ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆17 · Updated last year
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆39 · Updated 11 months ago
- Code release for AdapMoE, accepted at ICCAD 2024 ☆11 · Updated 3 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆106 · Updated 7 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆39 · Updated 3 months ago
- 16-fold memory access reduction with nearly no loss ☆76 · Updated 3 months ago
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆132 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆138 · Updated 7 months ago