SNU-ARC / flashneuron
☆31 · Updated last year
Related projects
Alternatives and complementary repositories for flashneuron
- PyTorch-UVM on super-large language models. ☆14 · Updated 3 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆46 · Updated 5 months ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆36 · Updated 8 months ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆35 · Updated 4 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆58 · Updated last month
- Thinking is hard - automate it ☆18 · Updated 2 years ago
- Artifact of ASPLOS'23 paper entitled: GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference ☆16 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆25 · Updated 9 months ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated 6 months ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆36 · Updated 8 months ago
- The Artifact of NeoMem: Hardware/Software Co-Design for CXL-Native Memory Tiering ☆28 · Updated 3 months ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆54 · Updated 7 months ago
- Distributed Multi-GPU GNN Framework ☆36 · Updated 4 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆57 · Updated 5 months ago
- Stateful LLM Serving ☆38 · Updated 3 months ago
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆14 · Updated 5 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆81 · Updated 2 years ago
- GVProf: A Value Profiler for GPU-based Clusters ☆48 · Updated 8 months ago