SamsungLabs / FastFlow
FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves them by offloading parts of the input data pipeline to remote resources.
☆26 · Updated 2 years ago
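For context on what "data pipeline offloading" means here: FastFlow targets TensorFlow input pipelines, and TensorFlow's standard tf.data service already provides the basic mechanism of running preprocessing on remote workers. The sketch below only illustrates that general idea under stated assumptions; the dispatcher address, dataset, and preprocessing function are placeholders, and this is not FastFlow's own API, which additionally decides automatically whether and how much of the pipeline to offload.

```python
import tensorflow as tf

# Address of a tf.data service dispatcher on a remote CPU node (placeholder assumption).
DISPATCHER = "grpc://remote-cpu-host:5000"

def preprocess(image, label):
    # CPU-heavy augmentation that can starve the GPU if run on the training node.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    return image, label

# Toy in-memory dataset standing in for a real input source.
images = tf.zeros([128, 64, 64, 3])
labels = tf.zeros([128], dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    # Everything above this point runs on remote tf.data service workers;
    # the training node only receives already-preprocessed elements.
    .apply(tf.data.experimental.service.distribute(
        processing_mode="distributed_epoch", service=DISPATCHER))
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

for batch_images, batch_labels in dataset:
    pass  # feed the training step here
```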
Alternatives and similar repositories for FastFlow:
Users interested in FastFlow are comparing it to the libraries listed below
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- ☆24 · Updated last year
- ☆49 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆51 · Updated 11 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆11 · Updated last year
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆40 · Updated 5 months ago
- SOTA Learning-augmented Systems ☆36 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- ☆48 · Updated 4 months ago
- ☆37 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆32 · Updated 2 years ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances. ☆49 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated last year
- [ATC '24] Metis: Fast automatic distributed training on heterogeneous GPUs (https://www.usenix.org/conference/atc24/presentation/um) ☆25 · Updated 5 months ago
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- LLM serving cluster simulator ☆99 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 8 months ago
- ☆79 · Updated 2 years ago
- Artifacts for our SIGCOMM'22 paper Muri ☆41 · Updated last year
- ☆14 · Updated 3 years ago
- ☆22 · Updated last year
- Model-less Inference Serving ☆88 · Updated last year
- ☆23 · Updated 10 months ago
- ☆53 · Updated 4 years ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆164 · Updated 6 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- An interference-aware scheduler for fine-grained GPU sharing ☆133 · Updated 3 months ago