lipracer / cuda-rt-hook
☆42 · Updated 2 months ago
Alternatives and similar repositories for cuda-rt-hook
Users interested in cuda-rt-hook are comparing it to the libraries listed below.
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆62, updated last year)
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs (☆113, updated 2 weeks ago)
- ☆85, updated 6 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling (☆43, updated 3 years ago)
- TiledLower is a dataflow analysis and codegen framework written in Rust (☆14, updated 10 months ago)
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆62, updated last year)
- ☆192, updated 2 months ago
- ☆57, updated 4 months ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling (☆100, updated 2 years ago)
- ☆51, updated 4 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] (☆30, updated 4 months ago)
- PerFlow-AI is a programmable performance analysis, modeling, and prediction tool for AI systems (☆24, updated this week)
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS '24) (☆55, updated last year)
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces (☆54, updated last year)
- My paper reading lists and notes (☆20, updated 2 weeks ago)
- ☆23, updated last year
- ☆24, updated 3 years ago
- ☆83, updated 2 years ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer (☆132, updated 3 weeks ago)
- GVProf: A Value Profiler for GPU-based Clusters (☆52, updated last year)
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS (☆31, updated 8 months ago)
- Summary of the specs of commonly used GPUs for training and inference of LLMs (☆63, updated last month)
- Horizontal Fusion (☆24, updated 3 years ago)
- Compiler for dynamic neural networks (☆46, updated last year)
- A prefill and decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation (☆111, updated 4 months ago)
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling (☆41, updated last week)
- ☆77, updated 3 years ago
- Artifact of the ASPLOS '23 paper "GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference" (☆19, updated 2 years ago)
- DeepSeek-V3/R1 inference performance simulator (☆169, updated 6 months ago)
- Matmul using AMX instructions (☆19, updated last year)