REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling. (☆104, last updated Dec 24, 2022)
Alternatives and similar repositories for REEF
Users interested in REEF are comparing it to the libraries listed below.
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. (☆58, last updated Aug 21, 2024)
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. (☆43, last updated May 29, 2022)
- An interference-aware scheduler for fine-grained GPU sharing. (☆159, last updated Nov 26, 2025)
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications. (☆127, last updated May 9, 2022)
- Model-less Inference Serving. (☆94, last updated Nov 4, 2023)
- Artifacts for our NSDI'23 paper TGS. (☆96, last updated Jun 10, 2024)
- Compiler for Dynamic Neural Networks. (☆45, last updated Nov 13, 2023)
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs. (☆158, last updated Jan 13, 2026)
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access. (☆56, last updated Aug 6, 2025)
- Nu is a new datacenter system that enables developers to build fungible applications that can use datacenter resources wherever they are. (☆41, last updated May 14, 2024)
- This is a fast RDMA abstraction layer that works both in the kernel and in user space. (☆59, last updated Nov 12, 2024)
- An OS kernel module for fast **remote** fork using advanced datacenter networking (RDMA). (☆71, last updated Feb 15, 2025)
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training. (☆24, last updated Mar 1, 2024)
- Project Mitosis introduction. (☆19, last updated Nov 13, 2022)
- Deduplication over disaggregated memory for serverless computing. (☆14, last updated Mar 21, 2022)
- Implementation from scratch in C of the multi-head latent attention used in the DeepSeek-V3 technical paper. (☆18, last updated Jan 15, 2025)
- Official implementation of the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… (☆14, last updated Nov 17, 2025)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling. (☆68, last updated May 1, 2024)
- FTPipe and related pipeline model parallelism research. (☆44, last updated May 16, 2023)
- A curated list of papers related to edge-AI inference. (☆99, last updated Dec 21, 2023)
- Quantized Attention on GPU. (☆44, last updated Nov 22, 2024)
- GPU-scheduler-for-deep-learning. (☆210, last updated Nov 5, 2020)
- Matrix multiplication on GPUs for matrices stored on the CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs. (☆32, last updated Apr 2, 2025)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. (☆1,006, last updated Sep 19, 2024)
- Virtual Memory Abstraction for Serverless Architectures. (☆49, last updated Mar 18, 2022)
- A low-latency, high-throughput serving engine for LLMs. (☆480, last updated Jan 8, 2026)
- Disaggregated serving system for large language models (LLMs). (☆777, last updated Apr 6, 2025)
- A tool for examining GPU scheduling behavior. (☆95, last updated Aug 17, 2024)
- A decentralized scalar timestamp scheme. (☆16, last updated Apr 12, 2021)