SamsungLabs / FastFlow
FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves them by offloading parts of the data pipeline to remote resources.
☆27 · Updated last year
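As a rough illustration of the offloading idea described above, the sketch below moves a CPU-heavy preprocessing pipeline to remote workers using TensorFlow's tf.data service, the general mechanism that remote data-pipeline offloading builds on. This is a minimal, generic example rather than FastFlow's own API, and the dispatcher address is a placeholder.

```python
# Minimal sketch: offload a CPU-heavy tf.data pipeline to remote workers
# via TensorFlow's tf.data service. Not FastFlow's API; the dispatcher
# address below is a hypothetical placeholder.
import tensorflow as tf

def build_dataset():
    # A preprocessing pipeline that can saturate local CPUs during training.
    ds = tf.data.Dataset.range(100_000)
    ds = ds.map(lambda x: tf.cast(x, tf.float32) / 255.0,
                num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.batch(256)
    # Run the pipeline on remote CPU workers instead of the local host.
    ds = ds.apply(tf.data.experimental.service.distribute(
        processing_mode="parallel_epochs",
        service="grpc://remote-dispatcher:5050"))  # placeholder address
    return ds.prefetch(tf.data.AUTOTUNE)
```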
Alternatives and similar repositories for FastFlow:
Users interested in FastFlow are comparing it to the libraries listed below.
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆57 · Updated 11 months ago
- ☆24 · Updated last year
- ☆47 · Updated 2 months ago
- An interference-aware scheduler for fine-grained GPU sharing ☆129 · Updated last month
- ☆35 · Updated 4 years ago
- ☆53 · Updated 4 years ago
- Multi-Instance-GPU profiling tool ☆57 · Updated last year
- ☆23 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆31 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- SOTA Learning-augmented Systems ☆35 · Updated 2 years ago
- ☆23 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆53 · Updated 7 months ago
- ☆49 · Updated 2 years ago
- An experimental parallel training platform ☆54 · Updated 11 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆153 · Updated 5 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 10 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆10 · Updated last year
- ☆19 · Updated 2 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated 10 months ago
- Model-less Inference Serving ☆85 · Updated last year
- ☆18 · Updated 9 months ago
- LLM serving cluster simulator ☆94 · Updated 10 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆82 · Updated last year
- ☆14 · Updated 2 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 9 months ago
- LLM Inference analyzer for different hardware platforms ☆54 · Updated 2 weeks ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago