EMDC-OS / power-aware-triton
Know Your Enemy To Save Cloud Energy: Energy-Performance Characterization of Machine Learning Serving (HPCA '23)
☆13 · Updated 3 months ago
Alternatives and similar repositories for power-aware-triton
Users interested in power-aware-triton are comparing it to the libraries listed below.
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last month
- Intel staging area for llvm.org contribution. Home for Intel LLVM-based projects. ☆38 · Updated 3 months ago
- ☆73 · Updated 3 months ago
- ☆12 · Updated 5 months ago
- ☆25 · Updated 2 years ago
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS '23) ☆16 · Updated 2 months ago
- MISO: Exploiting Multi-Instance GPU Capability on Multi-Tenant GPU Clusters ☆20 · Updated 2 years ago
- Repository for MLCommons Chakra schema and tools ☆126 · Updated this week
- ☆51 · Updated 8 months ago
- ☆25 · Updated 2 years ago
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆15 · Updated 4 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- ☆30 · Updated 2 weeks ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆140 · Updated 2 months ago
- LLM serving cluster simulator ☆114 · Updated last year
- Repository for MLCommons Chakra schema and tools ☆39 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆146 · Updated 7 months ago
- Microsoft Collective Communication Library ☆360 · Updated 2 years ago
- Synthesizer for optimal collective communication algorithms ☆117 · Updated last year
- ☆103 · Updated 2 years ago
- ☆37 · Updated 2 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆51 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated 2 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆130 · Updated last year
- ☆52 · Updated 3 months ago
- Releasing the spot availability traces used in the "Can't Be Late" paper. ☆23 · Updated last year
- "JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs" (EuroSys '25) ☆14 · Updated 5 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆13 · Updated last year
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆430 · Updated 3 weeks ago