REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling.
☆107 · Updated Dec 24, 2022
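REEF's published implementation targets AMD GPUs on the ROCm/HIP stack, where it adds kernel-level instant preemption that stock vendor APIs do not provide. As a loose illustration of the "biased concurrent execution" idea only, the hedged sketch below uses plain CUDA stream priorities, a much coarser mechanism that merely favors one stream's blocks when the hardware picks work; it is not REEF's API, and all kernel and variable names are hypothetical.

```cpp
// Sketch, not REEF: approximate "biased" co-execution with CUDA stream
// priorities, so a latency-critical kernel is favored over best-effort work.
#include <cuda_runtime.h>

__global__ void rt_infer(float* out) {   // hypothetical latency-critical kernel
    out[blockIdx.x * blockDim.x + threadIdx.x] = 1.0f;
}
__global__ void be_train(float* out) {   // hypothetical best-effort kernel
    out[blockIdx.x * blockDim.x + threadIdx.x] = 2.0f;
}

int main() {
    int least, greatest;                 // in CUDA, a lower number means higher priority
    cudaDeviceGetStreamPriorityRange(&least, &greatest);

    cudaStream_t rtStream, beStream;
    cudaStreamCreateWithPriority(&rtStream, cudaStreamNonBlocking, greatest);
    cudaStreamCreateWithPriority(&beStream, cudaStreamNonBlocking, least);

    const int rtN = 4 * 256, beN = 1024 * 256;
    float *rtBuf, *beBuf;
    cudaMalloc(&rtBuf, rtN * sizeof(float));
    cudaMalloc(&beBuf, beN * sizeof(float));

    be_train<<<1024, 256, 0, beStream>>>(beBuf);  // long-running background work
    rt_infer<<<4, 256, 0, rtStream>>>(rtBuf);     // favored by the block scheduler

    cudaDeviceSynchronize();
    cudaFree(rtBuf);
    cudaFree(beBuf);
    return 0;
}
```

Note that stream priorities only bias which blocks are dispatched next; unlike REEF, they cannot evict a kernel that is already running, which is precisely what "instant kernel preemption" adds.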
Alternatives and similar repositories for reef
Users interested in reef are comparing it to the libraries listed below.
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆59 · Updated Aug 21, 2024
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated May 29, 2022
- An interference-aware scheduler for fine-grained GPU sharing ☆162 · Updated Nov 26, 2025
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆127 · Updated May 9, 2022
- Artifacts for our NSDI'23 paper TGS ☆97 · Updated Jun 10, 2024
- Model-less Inference Serving ☆94 · Updated Nov 4, 2023
- This is a fast RDMA abstraction layer that works in both kernel and user space. ☆59 · Updated Nov 12, 2024
- Compiler for Dynamic Neural Networks ☆45 · Updated Nov 13, 2023
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs ☆167 · Updated Jan 13, 2026
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Updated Mar 1, 2024
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated Aug 6, 2025
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆71 · Updated May 1, 2024