zhengzangw / Sequence-Scheduling
PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline".
☆93 · Updated 2 years ago
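The page gives only the paper title, so here is a minimal, hypothetical sketch of the idea that title describes: estimate each prompt's response length with a predictor, then batch prompts of similar predicted length so a batch pads only up to its own maximum. The names (`schedule_by_predicted_length`, `predict_len`, `batch_size`) are illustrative assumptions, not this repository's actual API.

```python
# Hypothetical sketch of length-aware batch scheduling. The predictor could be
# anything that maps a prompt to an estimated response length (the paper's
# "response length perception"); here it is just a callable argument.
from typing import Callable, List

def schedule_by_predicted_length(
    prompts: List[str],
    predict_len: Callable[[str], int],  # assumed length predictor, not the repo's API
    batch_size: int = 8,
) -> List[List[str]]:
    # Sort prompts by predicted response length, then slice into contiguous
    # batches, so sequences in a batch finish at similar times and little
    # compute is wasted padding short responses out to a long one.
    ranked = sorted(prompts, key=predict_len)
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]
```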
Alternatives and similar repositories for Sequence-Scheduling
Users interested in Sequence-Scheduling are comparing it to the libraries listed below.
- ☆88 · Updated 3 years ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆66 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆203 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 8 months ago
- A resilient distributed training framework ☆96 · Updated last year
- ☆63 · Updated last year
- ☆58 · Updated last year
- ☆126 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆33 · Updated last year
- ☆79 · Updated 2 months ago
- ☆84 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆172 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆133 · Updated last year
- ☆293 · Updated 5 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆357 · Updated 5 months ago
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆490 · Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆215 · Updated 10 months ago
- ☆83 · Updated 8 months ago
- ATC23 AE ☆47 · Updated 2 years ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated last year
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆63 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆265 · Updated 2 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆137 · Updated last month
- ☆144 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 2 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆219 · Updated last year
- ☆348 · Updated last year
- ☆156 · Updated 5 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Updated 9 months ago