[ICML 2024] Serving LLMs on heterogeneous decentralized clusters.
☆34 · May 6, 2024 · Updated last year
Alternatives and similar repositories for HexGen
Users interested in HexGen are comparing it to the libraries listed below.
- Accommodating Large Language Model Training over Heterogeneous Environment. ☆25 · Mar 13, 2025 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆78 · Oct 15, 2025 · Updated 5 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆161 · Nov 26, 2025 · Updated 3 months ago
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" ☆37 · Aug 29, 2025 · Updated 6 months ago
- A large-scale simulation framework for LLM inference ☆556 · Jul 25, 2025 · Updated 7 months ago
- ☆47 · Jun 27, 2024 · Updated last year
- A resilient distributed training framework ☆97 · Apr 11, 2024 · Updated last year
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆20 · Feb 23, 2024 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Mar 7, 2024 · Updated 2 years ago
- Disaggregated serving system for Large Language Models (LLMs). ☆785 · Apr 6, 2025 · Updated 11 months ago
- Personal notes and annotated papers collected during daily research. ☆187 · Updated this week
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆23 · Oct 22, 2025 · Updated 4 months ago
- ☆131 · Nov 11, 2024 · Updated last year
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" ☆16 · Jul 10, 2025 · Updated 8 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Dec 11, 2022 · Updated 3 years ago
- ☆24 · Aug 15, 2023 · Updated 2 years ago
- A low-latency & high-throughput serving engine for LLMs ☆484 · Jan 8, 2026 · Updated 2 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) ☆182 · Jul 10, 2024 · Updated last year
- Host for the CIFAR-10.2 dataset ☆13 · Sep 22, 2021 · Updated 4 years ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆75 · Nov 4, 2024 · Updated last year
- APEX+ is an LLM serving simulator ☆44 · Jun 16, 2025 · Updated 9 months ago
- ☆34 · Apr 8, 2025 · Updated 11 months ago
- ☆17 · May 10, 2024 · Updated last year
- ☆87 · Oct 17, 2025 · Updated 5 months ago
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting ☆17 · Mar 4, 2025 · Updated last year
- ☆19 · May 4, 2023 · Updated 2 years ago
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Oct 29, 2025 · Updated 4 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆92 · May 23, 2023 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆94 · Jul 14, 2023 · Updated 2 years ago
- ☆26 · Mar 14, 2024 · Updated 2 years ago
- Efficient and easy multi-instance LLM serving ☆532 · Mar 12, 2026 · Updated last week
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆44 · Nov 4, 2022 · Updated 3 years ago
- ☆13 · Feb 22, 2023 · Updated 3 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆137 · Jul 25, 2024 · Updated last year
- ☆13 · Jan 28, 2026 · Updated last month
- ☆25 · Mar 15, 2023 · Updated 3 years ago