project-etalon / etalon
LLM Serving Performance Evaluation Harness
☆83 · Feb 25, 2025 · Updated 11 months ago
Alternatives and similar repositories for etalon
Users interested in etalon are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆474 · Jan 8, 2026 · Updated last month
- A large-scale simulation framework for LLM inference ☆535 · Jul 25, 2025 · Updated 6 months ago
- ☆21 · Apr 17, 2025 · Updated 9 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆776 · Apr 6, 2025 · Updated 10 months ago
- ☆131 · Nov 11, 2024 · Updated last year
- ☆47 · Jun 27, 2024 · Updated last year
- ☆85 · Oct 17, 2025 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · May 30, 2025 · Updated 8 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆314 · Jun 10, 2025 · Updated 8 months ago
- Stateful LLM Serving ☆95 · Mar 11, 2025 · Updated 11 months ago
- Open-sourced backend for Martian's LLM Inference Provider Leaderboard ☆21 · Aug 13, 2024 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆945 · Oct 29, 2025 · Updated 3 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Feb 22, 2024 · Updated last year
- Efficient and easy multi-instance LLM serving ☆527 · Sep 3, 2025 · Updated 5 months ago
- Predict the performance of LLM inference services ☆21 · Sep 18, 2025 · Updated 4 months ago
- ☆22 · Mar 7, 2025 · Updated 11 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Feb 21, 2025 · Updated 11 months ago
- Vocabulary Parallelism ☆25 · Mar 10, 2025 · Updated 11 months ago
- FaaSNet: Scalable and Fast Provisioning of Custom Serverless Container Runtimes at Alibaba Cloud Function Compute (USENIX ATC'21) ☆56 · Dec 7, 2021 · Updated 4 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 7 months ago
- Impactful systems for ML papers. ☆10 · Aug 21, 2024 · Updated last year
- LLM serving cluster simulator ☆135 · Apr 25, 2024 · Updated last year
- A model serving framework for various research and production scenarios. Seamlessly built upon the PyTorch and HuggingFace ecosystem. ☆23 · Oct 11, 2024 · Updated last year
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆52 · Aug 6, 2025 · Updated 6 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆280 · Feb 2, 2026 · Updated last week
- ☆12 · Mar 5, 2025 · Updated 11 months ago
- Efficient LLM Inference Acceleration using Prompting ☆51 · Oct 22, 2024 · Updated last year
- ☆13 · Jan 7, 2025 · Updated last year
- A Distributed Analysis and Benchmarking Framework for Apache OpenWhisk Serverless Platform ☆12 · Dec 11, 2018 · Updated 7 years ago
- A benchmark suite for evaluating FaaS schedulers. ☆23 · Nov 5, 2022 · Updated 3 years ago
- ☆26 · Aug 31, 2023 · Updated 2 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Nov 21, 2024 · Updated last year
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … ☆46 · Jun 1, 2024 · Updated last year
- Empowering everyone to create reliable and safe AI coding agents. ☆12 · Sep 2, 2024 · Updated last year
- ☆33 · Jan 25, 2026 · Updated 3 weeks ago
- An evaluation framework for data center traffic engineering. ☆13 · Jul 28, 2024 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Mar 6, 2024 · Updated last year
- Source code for the paper "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" ☆31 · Oct 24, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Updated this week