SNU-ARC / Ginex
Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching
☆41 · Jul 10, 2024 · Updated last year
Alternatives and similar repositories for Ginex
Users interested in Ginex are comparing it to the repositories listed below.
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. ☆51 · May 23, 2023 · Updated 2 years ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… ☆33 · Mar 15, 2023 · Updated 2 years ago
- ☆47 · Sep 5, 2022 · Updated 3 years ago
- ☆42 · Jun 13, 2025 · Updated 8 months ago
- Accelerating Deep Learning Training Through Transparent Storage Tiering (CCGrid'22) ☆19 · Dec 13, 2022 · Updated 3 years ago
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA'25] ☆13 · Dec 9, 2024 · Updated last year
- ☆22 · Dec 4, 2020 · Updated 5 years ago
- A list of awesome GNN systems. ☆336 · Updated this week
- Large-scale graph learning on a single machine. ☆167 · Feb 25, 2025 · Updated 11 months ago
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) ☆44 · Jul 1, 2023 · Updated 2 years ago
- Open-source code of BGL (NSDI 2023) ☆18 · Jul 24, 2023 · Updated 2 years ago
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆36 · Mar 1, 2023 · Updated 2 years ago
- [MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node … ☆56 · Oct 6, 2023 · Updated 2 years ago
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆40 · Nov 16, 2021 · Updated 4 years ago
- ☆40 · Nov 28, 2022 · Updated 3 years ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs. ☆302 · Aug 17, 2023 · Updated 2 years ago
- Artifacts of VLDB'22 paper "COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression" ☆10 · Aug 2, 2022 · Updated 3 years ago
- FPGA-based HyperLogLog Accelerator ☆12 · Jul 13, 2020 · Updated 5 years ago
- Efficient-Tensor-Management-on-HM-for-Deep-Learning ☆10 · Nov 15, 2021 · Updated 4 years ago
- ☆11 · Apr 3, 2023 · Updated 2 years ago
- [PACT'24] GraNNDis: a fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… ☆10 · Aug 13, 2024 · Updated last year
- Artifact for OSDI'21 "GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs" ☆70 · Mar 2, 2023 · Updated 2 years ago
- ☆28 · Aug 14, 2024 · Updated last year
- A Factored System for Sample-based GNN Training over GPUs ☆46 · Jul 26, 2023 · Updated 2 years ago
- ☆10 · Apr 29, 2023 · Updated 2 years ago
- Graph accelerator on FPGAs and ASICs ☆11 · Aug 16, 2018 · Updated 7 years ago
- A Vector Caching Scheme for Streaming FPGA SpMV Accelerators ☆10 · Sep 7, 2015 · Updated 10 years ago
- ☆24 · Jun 21, 2023 · Updated 2 years ago
- A reading list for deep graph learning acceleration. ☆254 · Jul 26, 2025 · Updated 6 months ago
- CAM: Asynchronous GPU-Initiated, CPU-Managed SSD Management for Batching Storage Access [ICDE'25] ☆18 · Mar 3, 2025 · Updated 11 months ago
- ☆12 · Feb 16, 2023 · Updated 2 years ago
- ☆19 · Jun 1, 2025 · Updated 8 months ago
- C++17 implementation of einops for libtorch - clear and reliable tensor manipulations with Einstein-like notation ☆11 · Oct 16, 2023 · Updated 2 years ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆52 · Jul 21, 2025 · Updated 6 months ago
- ☆28 · Nov 29, 2024 · Updated last year
- DUCATI: A Dual-Cache Training System for Graph Neural Networks on Giant Graphs with the GPU (accepted at SIGMOD 2023) ☆15 · Dec 15, 2023 · Updated 2 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆27 · Dec 10, 2022 · Updated 3 years ago
- ☆13 · Mar 26, 2024 · Updated last year
- A repo collecting state-of-the-art GNN hardware-acceleration papers ☆54 · Jun 8, 2021 · Updated 4 years ago