LLM Serving Performance Evaluation Harness
☆83 · Updated Feb 25, 2025
Alternatives and similar repositories for etalon
Users interested in etalon are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆482 · Updated Jan 8, 2026
- A large-scale simulation framework for LLM inference ☆545 · Updated Jul 25, 2025
- ☆21 · Updated Apr 17, 2025
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Updated Apr 6, 2025
- ☆131 · Updated Nov 11, 2024
- ☆47 · Updated Jun 27, 2024
- ☆87 · Updated Oct 17, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆465 · Updated May 30, 2025
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆315 · Updated Jun 10, 2025
- Stateful LLM Serving ☆97 · Updated Mar 11, 2025
- Open sourced backend for Martian's LLM Inference Provider Leaderboard ☆21 · Updated Aug 13, 2024
- A throughput-oriented high-performance serving framework for LLMs ☆947 · Updated Oct 29, 2025
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated Feb 22, 2024
- Efficient and easy multi-instance LLM serving ☆528 · Updated Sep 3, 2025
- ☆23 · Updated Mar 7, 2025
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆64 · Updated Feb 21, 2025
- Vocabulary Parallelism ☆25 · Updated Mar 10, 2025
- FaaSNet: Scalable and Fast Provisioning of Custom Serverless Container Runtimes at Alibaba Cloud Function Compute (USENIX ATC'21) ☆56 · Updated Dec 7, 2021
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆122 · Updated Jul 4, 2025
- Impactful systems for ML papers. ☆10 · Updated Aug 21, 2024
- LLM serving cluster simulator ☆135 · Updated Apr 25, 2024
- A model serving framework for various research and production scenarios. Seamlessly built upon the PyTorch and HuggingFace ecosystem. ☆23 · Updated Oct 11, 2024
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Updated Aug 6, 2025
- A Distributed Analysis and Benchmarking Framework for the Apache OpenWhisk Serverless Platform ☆12 · Updated Dec 11, 2018
- Efficient LLM Inference Acceleration using Prompting ☆51 · Updated Oct 22, 2024
- ☆13 · Updated Jan 7, 2025
- ☆12 · Updated Mar 5, 2025
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆285 · Updated this week
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Updated Nov 21, 2024
- ☆26 · Updated Aug 31, 2023
- A benchmark suite for evaluating FaaS schedulers. ☆23 · Updated Nov 5, 2022
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … ☆47 · Updated Jun 1, 2024
- An evaluation framework for data center traffic engineering. ☆14 · Updated Jul 28, 2024
- Empowering everyone to create reliable and safe AI coding agents. ☆12 · Updated Sep 2, 2024
- ☆34 · Updated Jan 25, 2026
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated Mar 6, 2024
- Source code of the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" ☆31 · Updated Oct 24, 2024
- ☆15 · Updated Nov 7, 2024
- Example of applying CUDA graphs to LLaMA-v2 ☆12 · Updated Aug 25, 2023