[ICML 2024] Serving LLMs on heterogeneous decentralized clusters.
☆36 · May 6, 2024 · Updated last year
Alternatives and similar repositories for HexGen
Users interested in HexGen are comparing it to the libraries listed below.
- Accommodating Large Language Model Training over Heterogeneous Environment ☆28 · Mar 13, 2025 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆83 · Oct 15, 2025 · Updated 6 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" ☆38 · Aug 29, 2025 · Updated 8 months ago
- Accurate, large-scale, and extensible simulator for LLM inference systems ☆595 · Jul 25, 2025 · Updated 9 months ago
- ☆47 · Jun 27, 2024 · Updated last year
- A resilient distributed training framework ☆99 · Apr 11, 2024 · Updated 2 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24) ☆20 · Feb 23, 2024 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Mar 7, 2024 · Updated 2 years ago
- Disaggregated serving system for Large Language Models (LLMs) ☆804 · Apr 6, 2025 · Updated last year
- A repository of personal notes and annotated papers from daily research ☆190 · Apr 13, 2026 · Updated 2 weeks ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- Galvatron is an automatic distributed training system designed for Transformer models, including Large Language Models (LLMs). If you hav… ☆24 · Oct 22, 2025 · Updated 6 months ago
- ☆132 · Nov 11, 2024 · Updated last year
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆56 · May 10, 2024 · Updated last year
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" ☆18 · Jul 10, 2025 · Updated 9 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances ☆55 · Dec 11, 2022 · Updated 3 years ago
- ☆24 · Aug 15, 2023 · Updated 2 years ago
- A low-latency and high-throughput serving engine for LLMs ☆496 · Jan 8, 2026 · Updated 3 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) ☆184 · Jul 10, 2024 · Updated last year
- Host for the CIFAR-10.2 data set ☆13 · Sep 22, 2021 · Updated 4 years ago
- APEX+ is an LLM serving simulator ☆45 · Jun 16, 2025 · Updated 10 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆79 · Nov 4, 2024 · Updated last year
- ☆34 · Apr 8, 2025 · Updated last year
- ☆17 · May 10, 2024 · Updated last year
- ☆88 · Oct 17, 2025 · Updated 6 months ago
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting ☆18 · Mar 4, 2025 · Updated last year
- ☆19 · May 4, 2023 · Updated 2 years ago
- Proof of concept using Sysdig metrics as the decision variable for a Kubernetes scheduler ☆14 · Nov 3, 2017 · Updated 8 years ago
- A throughput-oriented high-performance serving framework for LLMs ☆954 · Mar 29, 2026 · Updated last month
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆92 · May 23, 2023 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆94 · Jul 14, 2023 · Updated 2 years ago
- ☆26 · Mar 14, 2024 · Updated 2 years ago
- Efficient and easy multi-instance LLM serving ☆547 · Mar 12, 2026 · Updated last month
- ☆13 · Feb 22, 2023 · Updated 3 years ago
- ☆13 · Jan 28, 2026 · Updated 3 months ago
- ☆25 · Mar 15, 2023 · Updated 3 years ago
- ☆11 · Dec 18, 2020 · Updated 5 years ago