REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling.
☆108 · Updated Dec 24, 2022
Alternatives and similar repositories for reef
Users interested in reef are comparing it to the libraries listed below.
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆59 · Updated Aug 21, 2024
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated May 29, 2022
- ☆53 · Updated Dec 26, 2024
- An interference-aware scheduler for fine-grained GPU sharing ☆162 · Updated Nov 26, 2025
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated May 9, 2022
- ☆38 · Updated Jun 27, 2025
- Artifacts for our NSDI'23 paper TGS ☆97 · Updated Jun 10, 2024
- Model-less Inference Serving ☆94 · Updated Nov 4, 2023
- This is a fast RDMA abstraction layer that works both in the kernel and in user space. ☆59 · Updated Nov 12, 2024
- Compiler for Dynamic Neural Networks ☆45 · Updated Nov 13, 2023
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs ☆169 · Updated Jan 13, 2026
- ☆15 · Updated Aug 15, 2024
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Updated Mar 1, 2024
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated Aug 6, 2025
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆71 · Updated May 1, 2024
- ☆53 · Updated Dec 13, 2022
- ☆84 · Updated Dec 2, 2022
- ☆23 · Updated Oct 31, 2023
- This is a list of awesome edge-AI inference related papers. ☆98 · Updated Dec 21, 2023
- Project Mitosis Introduction ☆19 · Updated Nov 13, 2022
- Official implementation for the paper Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapp… ☆14 · Updated Nov 17, 2025
- Multi-Instance-GPU profiling tool ☆59 · Updated Apr 16, 2023
- Official repository for "IPDPS'24 QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices". ☆20 · Updated Feb 23, 2024
- A graph-based distributed in-memory store that leverages efficient graph exploration to provide highly concurrent and low-latency queries… ☆192 · Updated Jan 4, 2026
- An OS kernel module for fast **remote** fork using advanced datacenter networking (RDMA). ☆72 · Updated Feb 15, 2025
- GPU-scheduler-for-deep-learning ☆209 · Updated Nov 5, 2020
- FTPipe and related pipeline model parallelism research. ☆44 · Updated May 16, 2023
- A tool for examining GPU scheduling behavior. ☆96 · Updated Aug 17, 2024
- Deduplication over disaggregated memory for Serverless Computing ☆14 · Updated Mar 21, 2022
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,000 · Updated Sep 19, 2024
- Disaggregated serving system for Large Language Models (LLMs). ☆807 · Updated Apr 6, 2025
- Fast In-memory Transaction Processing using RDMA and HTM ☆59 · Updated Dec 20, 2015
- A low-latency & high-throughput serving engine for LLMs ☆496 · Updated Jan 8, 2026
- An efficient GPU resource sharing system with fine-grained control for Linux platforms. ☆89 · Updated Mar 25, 2024
- Reproduction of the libsmctrl paper, with an added Python-side interface so compute-resource allocation can be invoked flexibly from Python. ☆12 · Updated May 21, 2024
- Nu is a new datacenter system that enables developers to build fungible applications that can use datacenter resources wherever they are. ☆41 · Updated May 14, 2024
- ☆200 · Updated Aug 31, 2019
- ☆38 · Updated Jan 15, 2021
- ☆30 · Updated Oct 27, 2023