msr-fiddle / CoorDL
☆24 · Updated 2 years ago
Alternatives and similar repositories for CoorDL
Users interested in CoorDL are comparing it to the repositories listed below.
- ☆38 · Updated 4 years ago
- ☆56 · Updated 4 years ago
- Fine-grained GPU sharing primitives ☆147 · Updated 3 months ago
- An I/O benchmark for deep learning applications ☆94 · Updated last week
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- ☆44 · Updated 4 years ago
- Model-less Inference Serving ☆91 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆53 · Updated 2 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆55 · Updated last year
- ☆24 · Updated 3 years ago
- ☆51 · Updated 2 years ago
- Tiresias is a GPU cluster manager for distributed deep learning training. ☆163 · Updated 5 years ago
- ☆15 · Updated 3 years ago
- ☆53 · Updated 10 months ago
- MLPerf® Storage Benchmark Suite ☆167 · Updated last week
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", which appeared at OSDI 2020 ☆131 · Updated last year
- ML Input Data Processing as a Service. This repository contains the source code for Cachew (built on top of TensorFlow). ☆39 · Updated last year
- GeminiFS: A Companion File System for GPUs ☆58 · Updated 8 months ago
- rFaaS: a high-performance FaaS platform with RDMA acceleration for low-latency invocations ☆57 · Updated 4 months ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆45 · Updated 2 years ago
- ☆196 · Updated 6 years ago
- Deduplication over disaggregated memory for serverless computing ☆14 · Updated 3 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆150 · Updated 9 months ago
- ☆31 · Updated last year
- Accelerating Deep Learning Training Through Transparent Storage Tiering (CCGrid'22) ☆19 · Updated 2 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 3 months ago
- ☆17 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling ☆43 · Updated 3 years ago
- ☆202 · Updated 2 months ago