LLM Serving Performance Evaluation Harness
☆85 · Feb 25, 2025 · Updated last year
Alternatives and similar repositories for etalon
Users interested in etalon are comparing it to the libraries listed below.
- A low-latency & high-throughput serving engine for LLMs ☆496 · Jan 8, 2026 · Updated 4 months ago
- Accurate, large-scale, and extensible simulator for LLM inference systems ☆595 · Jul 25, 2025 · Updated 9 months ago
- Disaggregated serving system for Large Language Models (LLMs) ☆807 · Apr 6, 2025 · Updated last year
- ☆47 · Jun 27, 2024 · Updated last year
- ☆133 · Nov 11, 2024 · Updated last year
- ☆89 · Oct 17, 2025 · Updated 6 months ago
- ☆22 · Apr 17, 2025 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- Stateful LLM Serving ☆99 · Mar 11, 2025 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆482 · May 30, 2025 · Updated 11 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆328 · Jun 10, 2025 · Updated 10 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆956 · Mar 29, 2026 · Updated last month
- Efficient and easy multi-instance LLM serving ☆547 · Mar 12, 2026 · Updated last month
- An evaluation framework for data center traffic engineering ☆14 · Jul 28, 2024 · Updated last year
- LLM serving cluster simulator ☆150 · Apr 25, 2024 · Updated 2 years ago
- An Open-Source SCAlable Interface for ISA Extensions for RISC-V Processors. New Version: ☆17 · Feb 29, 2024 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆122 · Mar 6, 2024 · Updated 2 years ago
- ☆15 · Nov 7, 2024 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- Baidu Hook ☆13 · Jan 7, 2016 · Updated 10 years ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆66 · Feb 21, 2025 · Updated last year
- ☆15 · Aug 12, 2023 · Updated 2 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 9 months ago
- Example of applying CUDA graphs to LLaMA-v2 ☆11 · Aug 25, 2023 · Updated 2 years ago
- Low-Latency Live Video Streaming over a Low-Earth-Orbit Satellite Network with DASH ☆18 · Sep 6, 2024 · Updated last year
- ☆12 · Mar 16, 2022 · Updated 4 years ago
- PyTorch library for cost-effective, fast and easy serving of MoE models ☆303 · Updated this week
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆125 · Jul 4, 2025 · Updated 10 months ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆85 · Oct 15, 2025 · Updated 6 months ago
- RISC-V ISA based 32-bit processor written in HLS ☆16 · Nov 7, 2019 · Updated 6 years ago
- [TBD] "m4: A Learned Flow-level Network Simulator" by Chenning Li, Anton A. Zabreyko, Om Chabra, Arash Nasr-Esfahany, Kevin Zhao, Pratees… ☆18 · Apr 27, 2026 · Updated last week
- A Distributed Analysis and Benchmarking Framework for the Apache OpenWhisk Serverless Platform ☆12 · Dec 11, 2018 · Updated 7 years ago
- Analysis for the traces from byteprofile ☆32 · Nov 21, 2023 · Updated 2 years ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- NeuroSpector: Dataflow and Mapping Optimizer for Deep Neural Network Accelerators ☆21 · Mar 20, 2025 · Updated last year
- [ICLR 2021] CompOFA: Compound Once-For-All Networks For Faster Multi-Platform Deployment ☆25 · Jan 5, 2023 · Updated 3 years ago
- A benchmark suite for evaluating FaaS schedulers ☆23 · Nov 5, 2022 · Updated 3 years ago
- ☆23 · Mar 7, 2025 · Updated last year
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and … ☆39 · Aug 29, 2025 · Updated 8 months ago