hao-ai-lab / vllm-ltr
[NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank
☆70 · Nov 4, 2024 · Updated last year
Alternatives and similar repositories for vllm-ltr
Users interested in vllm-ltr are comparing it to the repositories listed below.
- ☆20 · Jun 9, 2025 · Updated 8 months ago
- ☆85 · Oct 17, 2025 · Updated 3 months ago
- ☆131 · Nov 11, 2024 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT attention. · ☆32 · Nov 29, 2024 · Updated last year
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … · ☆46 · Jun 1, 2024 · Updated last year
- Efficient Long-context Language Model Training by Core Attention Disaggregation · ☆89 · Jan 29, 2026 · Updated 2 weeks ago
- APEX+ is an LLM Serving Simulator · ☆42 · Jun 16, 2025 · Updated 8 months ago
- ☆64 · Dec 3, 2024 · Updated last year
- [NeurIPS 2025] A simple extension to vLLM that helps you speed up reasoning models without training. · ☆220 · May 31, 2025 · Updated 8 months ago
- Code for the ICML 2024 paper "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models" · ☆11 · Jun 25, 2024 · Updated last year
- Quantized Attention on GPU · ☆44 · Nov 22, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs · ☆474 · Jan 8, 2026 · Updated last month
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆30 · Jun 14, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). · ☆776 · Apr 6, 2025 · Updated 10 months ago
- A large-scale simulation framework for LLM inference · ☆535 · Jul 25, 2025 · Updated 6 months ago
- Efficient and easy multi-instance LLM serving · ☆527 · Sep 3, 2025 · Updated 5 months ago
- A from-scratch C implementation of the multi-head latent attention used in the Deepseek-v3 technical paper. · ☆19 · Jan 15, 2025 · Updated last year
- ☆12 · Dec 8, 2022 · Updated 3 years ago
- ☆12 · Oct 16, 2022 · Updated 3 years ago
- [MLSys 2023] Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models · ☆16 · May 5, 2023 · Updated 2 years ago
- A reading list covering popular MLSys topics · ☆21 · Mar 20, 2025 · Updated 10 months ago
- The driver for LMCache core to run in vLLM · ☆60 · Feb 4, 2025 · Updated last year
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning · ☆65 · Oct 31, 2025 · Updated 3 months ago
- An Attention Superoptimizer · ☆22 · Jan 20, 2025 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" · ☆76 · Oct 15, 2025 · Updated 4 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆142 · Dec 4, 2024 · Updated last year
- TokenSim is a tool for simulating the behavior of large language models (LLMs) in a distributed environment. · ☆20 · Sep 20, 2025 · Updated 4 months ago
- ☆151 · Oct 9, 2024 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs · ☆945 · Oct 29, 2025 · Updated 3 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" · ☆93 · May 23, 2023 · Updated 2 years ago
- [ICML 2024] CLLMs: Consistency Large Language Models · ☆411 · Nov 16, 2024 · Updated last year
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems · ☆238 · Feb 1, 2026 · Updated 2 weeks ago
- ☆35 · Sep 13, 2025 · Updated 5 months ago
- ☆47 · Apr 29, 2025 · Updated 9 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. · ☆34 · May 6, 2024 · Updated last year
- PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References · ☆20 · Jun 13, 2024 · Updated last year
- Code for the EMNLP24 paper "A simple and effective L2 norm based method for KV Cache compression." · ☆18 · Dec 13, 2024 · Updated last year
- ☆22 · Apr 22, 2024 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference · ☆643 · Jan 15, 2026 · Updated last month