[NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank
☆72, updated Nov 4, 2024
Alternatives and similar repositories for vllm-ltr
Users interested in vllm-ltr are comparing it to the libraries listed below.
- ☆20, updated Jun 9, 2025
- ☆87, updated Oct 17, 2025
- ☆131, updated Nov 11, 2024
- An auxiliary project analyzing the characteristics of KV in DiT attention (☆33, updated Nov 29, 2024)
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … (☆47, updated Jun 1, 2024)
- Efficient Long-context Language Model Training by Core Attention Disaggregation (☆92, updated this week)
- APEX+, an LLM serving simulator (☆43, updated Jun 16, 2025)
- ☆64, updated Dec 3, 2024
- [NeurIPS 2025] A simple extension to vLLM that speeds up reasoning models without training (☆223, updated May 31, 2025)
- Quantized Attention on GPU (☆44, updated Jul 8, 2022 — see date below; ☆44, updated Nov 22, 2024)
- Distributed SDDMM Kernel (☆12, updated Jul 8, 2022)
- A low-latency & high-throughput serving engine for LLMs (☆482, updated Jan 8, 2026)
- Code for the ICML 2024 paper "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models" (☆12, updated Jun 25, 2024)
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) (☆30, updated Jun 14, 2024)
- Disaggregated serving system for Large Language Models (LLMs) (☆778, updated Apr 6, 2025)
- A large-scale simulation framework for LLM inference (☆545, updated Jul 25, 2025)
- Efficient and easy multi-instance LLM serving (☆528, updated Sep 3, 2025)
- A reading list on popular MLSys topics (☆22, updated Mar 20, 2025)
- ☆12, updated Dec 8, 2022
- ☆12, updated Oct 16, 2022
- From-scratch C implementation of the multi-head latent attention used in the Deepseek-v3 technical paper (☆18, updated Jan 15, 2025)
- [MLSys 2023] Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models (☆16, updated May 5, 2023)
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning (☆67, updated Oct 31, 2025)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆143, updated Dec 4, 2024)
- The driver for LMCache core to run in vLLM (☆62, updated Feb 4, 2025)
- An attention superoptimizer (☆22, updated Jan 20, 2025)
- TokenSim, a tool for simulating the behavior of large language models (LLMs) in a distributed environment (☆20, updated Sep 20, 2025)
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆77, updated Oct 15, 2025)
- ☆150, updated Oct 9, 2024
- A throughput-oriented high-performance serving framework for LLMs (☆947, updated Oct 29, 2025)
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" (☆92, updated May 23, 2023)
- [ICML 2024] CLLMs: Consistency Large Language Models (☆413, updated Nov 16, 2024)
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems (☆241, updated Feb 1, 2026)
- ☆38, updated Sep 13, 2025
- ☆47, updated Apr 29, 2025
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters (☆34, updated May 6, 2024)
- PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References (☆20, updated Jun 13, 2024)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity (☆234, updated Sep 24, 2023)
- ☆30, updated Dec 31, 2025