eniac / paella
Paella: Low-latency Model Serving with Virtualized GPU Scheduling
☆68 · Updated May 1, 2024
Alternatives and similar repositories for paella
Users interested in paella are also comparing it to the repositories listed below.
- Compiler for Dynamic Neural Networks ☆45 · Updated Nov 13, 2023
- Optimizing data-intensive systems in disaggregated data centers ☆13 · Updated Jun 13, 2022
- Welder, a deep learning compiler (OSDI 2023) ☆32 · Updated Nov 24, 2023
- DISB, a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces ☆58 · Updated Aug 21, 2024
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆34 · Updated Feb 10, 2025
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling ☆43 · Updated May 29, 2022
- Scaling Up Memory Disaggregated Applications with SMART ☆34 · Updated Apr 23, 2024
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆93 · Updated Jul 14, 2023
- ☆53 · Updated Dec 26, 2024
- REEF, a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling ☆104 · Updated Dec 24, 2022
- GLake: optimizing GPU memory management and IO transmission ☆497 · Updated Mar 24, 2025
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Updated Mar 13, 2023
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated May 9, 2022
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated Nov 26, 2025
- Fine-grained GPU sharing primitives ☆148 · Updated Jul 28, 2025
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,006 · Updated Sep 19, 2024
- ☆38 · Updated Jun 27, 2025
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated May 12, 2025
- Playing with GEMM in TVM ☆91 · Updated Jul 22, 2023
- A reproduction of the libsmctrl paper, with Python bindings added so compute resources can be allocated flexibly from Python ☆12 · Updated May 21, 2024
- Artifact of "NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers" (ASPLOS'23), https://nnsmith-asplos.rtfd.io ☆11 · Updated Mar 29, 2023
- A resilient distributed training framework ☆96 · Updated Apr 11, 2024
- DietCode code release ☆65 · Updated Jul 21, 2022
- Splits a single Nvidia GPU into multiple partitions with complete compute and memory isolation (with respect to performance) between the partitions ☆165 · Updated Apr 21, 2019
- Optimize tensor programs fast with Felix, a gradient-descent autotuner ☆30 · Updated Apr 27, 2024
- ☆10 · Updated Apr 29, 2023
- ☆11 · Updated Mar 9, 2022
- Source code for Speedlight, a system for Synchronized Network Snapshots ☆10 · Updated Aug 21, 2020
- NCCL Profiling Kit ☆152 · Updated Jul 1, 2024
- A hierarchically decoupled deep learning inference engine ☆79 · Updated Feb 17, 2025
- ☆29 · Updated Apr 4, 2024
- Model-less Inference Serving ☆94 · Updated Nov 4, 2023
- An experimental parallel training platform ☆56 · Updated Mar 25, 2024
- Benchmark workloads of Boki ☆11 · Updated Sep 8, 2021
- Code for reproducing experiments performed for Accordion ☆13 · Updated Jun 11, 2021
- A sparse-tensor communication framework for distributed deep learning ☆13 · Updated Nov 1, 2021
- Bamboo, a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances ☆55 · Updated Dec 11, 2022
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆55 · Updated May 10, 2024
- Hooks CUDA-related dynamic libraries using automated code-generation tools ☆172 · Updated Dec 12, 2023