quiver-team / quiver-feature
High-performance RDMA-based distributed feature collection component for training GNN models on extremely large graphs
☆52 · Updated 2 years ago
Alternatives and similar repositories for quiver-feature:
Users interested in quiver-feature are comparing it to the libraries listed below.
- Artifact of ASPLOS'23 paper entitled: GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference (☆18 · Updated 2 years ago)
- ☆23 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. (☆52 · Updated 7 months ago)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling (☆58 · Updated 11 months ago)
- Vector search with bounded performance. (☆34 · Updated last year)
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS (☆25 · Updated 2 months ago)
- FGNN's artifact evaluation (EuroSys 2022) (☆17 · Updated 2 years ago)
- My paper/code reading notes in Chinese (☆46 · Updated 10 months ago)
- A Factored System for Sample-based GNN Training over GPUs (☆42 · Updated last year)
- PetPS: Supporting Huge Embedding Models with Tiered Memory (☆30 · Updated 10 months ago)
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications (☆126 · Updated 2 years ago)
- SOTA Learning-augmented Systems (☆36 · Updated 2 years ago)
- Dorylus: Affordable, Scalable, and Accurate GNN Training (☆77 · Updated 3 years ago)
- ☆16 · Updated 2 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training (☆23 · Updated last year)
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… (☆94 · Updated 2 years ago)
- Graph Sampling using GPU (☆51 · Updated 3 years ago)
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. (☆42 · Updated 2 years ago)
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training (☆31 · Updated 2 years ago)
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… (☆40 · Updated last year)
- ☆23 · Updated 2 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆61 · Updated 10 months ago)
- ☆14 · Updated 2 years ago
- GVProf: A Value Profiler for GPU-based Clusters (☆49 · Updated last year)
- An Optimizing Compiler for Recommendation Model Inference (☆23 · Updated last year)
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) (☆22 · Updated 11 months ago)
- Thunder Research Group's Collective Communication Library (☆36 · Updated 11 months ago)
- ☆53 · Updated 4 years ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … (☆33 · Updated 10 months ago)
- Artifacts for our ASPLOS'23 paper ElasticFlow (☆51 · Updated 11 months ago)