dywsjtu / apparate
Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24]
☆24 · Updated 6 months ago
Alternatives and similar repositories for apparate
Users interested in apparate are comparing it to the repositories listed below.
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- ☆9 · Updated 10 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆50 · Updated 2 years ago
- Dynamic resource changes for multi-dimensional parallelism training ☆25 · Updated 6 months ago
- ☆25 · Updated last year
- SOTA Learning-augmented Systems ☆36 · Updated 3 years ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … ☆35 · Updated last year
- ☆16 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆46 · Updated 6 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆51 · Updated last year
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) ☆9 · Updated last year
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆43 · Updated 2 years ago
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Honorable Mention] ☆10 · Updated 2 months ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" [IPDPS '24] ☆20 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆48 · Updated 6 months ago
- A resilient distributed training framework ☆95 · Updated last year
- ☆21 · Updated last year
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆22 · Updated 2 weeks ago
- Stateful LLM Serving ☆70 · Updated 2 months ago
- ☆14 · Updated 3 years ago
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆16 · Updated last year
- ☆62 · Updated 11 months ago
- An Attention Superoptimizer ☆21 · Updated 4 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆25 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 9 months ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆32 · Updated last year
- Surrogate-based Hyperparameter Tuning System ☆28 · Updated last year
- ☆12 · Updated last month
- Vector search with bounded performance. ☆35 · Updated last year
- ☆53 · Updated 4 years ago