aravic / generalizable-device-placement
Reference code for https://arxiv.org/abs/1906.08879
☆18 · Updated 6 years ago
Alternatives and similar repositories for generalizable-device-placement
Users interested in generalizable-device-placement are comparing it to the libraries listed below.
- Metis: Learning to Schedule Long-Running Applications in Shared Container Clusters at Scale ☆19 · Updated 5 years ago
- ☆41 · Updated 5 years ago
- ☆226 · Updated 2 years ago
- HeliosArtifact ☆22 · Updated 3 years ago
- Learning Scheduling Algorithms for Data Processing Clusters ☆320 · Updated 4 years ago
- Code for "Solving Large-Scale Granular Resource Allocation Problems Efficiently with POP", which appeared at SOSP 2021 ☆28 · Updated 4 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆136 · Updated last year
- ☆47 · Updated last year
- [NSDI 2023] TopoOpt: Optimizing the Network Topology for Distributed DNN Training ☆36 · Updated last year
- Helios Traces from SenseTime ☆61 · Updated 3 years ago
- ☆94 · Updated 2 years ago
- ☆44 · Updated last year
- https://arxiv.org/abs/1706.04972 ☆45 · Updated 7 years ago
- ☆23 · Updated 4 years ago
- ☆24 · Updated 4 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆46 · Updated 3 years ago
- This repository contains code for the paper: Bergsma S., Zeyl T., Senderovich A., and Beck J. C., "Generating Complex, Realistic Cloud Wo… ☆43 · Updated 4 years ago
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine ☆15 · Updated 4 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆92 · Updated 2 years ago
- Primo: Practical Learning-Augmented Systems with Interpretable Models ☆19 · Updated 2 years ago
- ☆38 · Updated 6 months ago
- Surrogate-based Hyperparameter Tuning System ☆27 · Updated 2 years ago
- A Deep Learning Cluster Scheduler ☆37 · Updated 5 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 3 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆55 · Updated 3 years ago
- GPU-accelerated LLM Training Simulator ☆47 · Updated 6 months ago
- Artifacts for our SIGCOMM'22 paper Muri ☆43 · Updated 2 years ago
- Boost hardware utilization for ML training workloads via Inter-model Horizontal Fusion ☆32 · Updated last year
- ☆37 · Updated 2 years ago
- RLScheduler: An Automated HPC Batch Job Scheduler Using Reinforcement Learning [SC '20] ☆64 · Updated 2 years ago