Linestro / GRACE
Artifact of the ASPLOS'23 paper "GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference"
☆19 · Updated 2 years ago
Alternatives and similar repositories for GRACE
Users interested in GRACE are comparing it to the repositories listed below.
- ☆31 · Updated last year
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- ☆30 · Updated 5 years ago
- ☆18 · Updated 4 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- GVProf: A Value Profiler for GPU-based Clusters ☆52 · Updated last year
- Repository for reproducing the SC21 paper "In-Depth Analyses of Unified Virtual Memory System for GPU Accelerated…" ☆36 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- ☆36 · Updated last year
- ☆83 · Updated 2 years ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆31 · Updated 8 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- ☆23 · Updated 2 years ago
- ☆31 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling. ☆101 · Updated 2 years ago
- ☆61 · Updated 5 months ago
- Artifact of ngAP (ASPLOS'24) ☆24 · Updated 3 months ago
- ☆14 · Updated 6 years ago
- High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs ☆55 · Updated 3 years ago
- TiledLower is a dataflow analysis and codegen framework written in Rust. ☆14 · Updated 11 months ago
- Artifacts of EVT (ASPLOS'24) ☆27 · Updated last year
- WaferLLM: Large Language Model Inference at Wafer Scale ☆63 · Updated last week
- ☆42 · Updated 3 months ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 2 months ago
- Horizontal Fusion ☆24 · Updated 3 years ago
- ☆50 · Updated 6 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago