AIS-SNU / GraNNDis_Artifact
[PACT'24] GraNNDis: A fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and mini-batch training. It unifies full- and mini-batch training through a novel data/communication structure.
☆10 · Updated last year
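For context, a minimal sketch (plain PyTorch, not GraNNDis's actual API or its distributed machinery) contrasting the two training modes the framework unifies: full-batch training backpropagates through every node in the graph each step, while mini-batch training scores only a sampled node batch per step.

```python
# Hedged illustration: a toy 2-layer GCN trained full-batch, then mini-batch.
# The graph and all names here are synthetic; this is not GraNNDis code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N, F_IN, F_HID, C = 100, 16, 32, 4            # nodes, feature dims, classes

# Random toy graph: symmetric, self-looped, symmetrically normalized.
edges = torch.randint(0, N, (2, 400))
A = torch.zeros(N, N)
A[edges[0], edges[1]] = 1.0
A = ((A + A.t() + torch.eye(N)) > 0).float()
deg = A.sum(1)
A_hat = A / torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))  # D^-1/2 A D^-1/2

x = torch.randn(N, F_IN)                      # node features
y = torch.randint(0, C, (N,))                 # node labels

class GCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1, self.l2 = nn.Linear(F_IN, F_HID), nn.Linear(F_HID, C)
    def forward(self, a, h):
        h = F.relu(self.l1(a @ h))            # aggregate neighbors, transform
        return self.l2(a @ h)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Full-batch (full-graph): every step computes the loss over all nodes.
for _ in range(5):
    opt.zero_grad()
    loss = F.cross_entropy(model(A_hat, x), y)
    loss.backward(); opt.step()

# Mini-batch: each step computes the loss on a sampled node batch only.
# (Real mini-batch GNN systems also sample the neighborhood; here the
# aggregation still sees the whole graph, purely for brevity.)
for _ in range(5):
    batch = torch.randperm(N)[:20]
    opt.zero_grad()
    loss = F.cross_entropy(model(A_hat, x)[batch], y[batch])
    loss.backward(); opt.step()
```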
Alternatives and similar repositories for GraNNDis_Artifact
Users interested in GraNNDis_Artifact are comparing it to the libraries listed below.
- ☆27 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆50 · Updated 5 months ago
- ☆11 · Updated 7 months ago
- ☆58 · Updated last year
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆37 · Updated 4 months ago
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. ☆52 · Updated 2 years ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆34 · Updated last year
- A cycle-level simulator for M2NDP ☆32 · Updated 4 months ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆55 · Updated last year
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing ☆105 · Updated last year
- Artifact for paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 ☆114 · Updated 7 months ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆41 · Updated last year
- ☆15 · Updated last year
- ☆161 · Updated 10 months ago
- ☆42 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆90 · Updated 3 years ago
- Artifact for PPoPP22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆30 · Updated 3 years ago
- ☆115 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆164 · Updated 5 months ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated 2 years ago
- ☆214 · Updated 2 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆49 · Updated last year
- ☆10 · Updated 9 months ago
- ☆79 · Updated 6 months ago
- [HPCA 2022] GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆38 · Updated 3 years ago
- ☆24 · Updated 2 months ago
- This repo collects state-of-the-art GNN hardware acceleration papers. ☆54 · Updated 4 years ago