NVIDIA / hoti-2025-gpu-comms-tutorial
Tutorial Exercises and Code for GPU Communications Tutorial at HOT Interconnects 2025
☆26 · Updated 3 months ago
Alternatives and similar repositories for hoti-2025-gpu-comms-tutorial
Users interested in hoti-2025-gpu-comms-tutorial are comparing it to the libraries listed below.
- ☆93 · Updated 9 months ago
- ☆87 · Updated 8 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated last month
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆158 · Updated 4 months ago
- ☆76 · Updated last week
- A lightweight design for computation-communication overlap ☆213 · Updated last week
- NEO is an LLM inference engine built to ease the GPU memory crisis via CPU offloading ☆79 · Updated 7 months ago
- ☆130 · Updated last year
- Tile-based language built for AI computation across all scales ☆117 · Updated 2 weeks ago
- High-performance Transformer implementation in C++ ☆148 · Updated last year
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆53 · Updated 2 weeks ago
- ☆79 · Updated 3 years ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆83 · Updated last month
- Thunder Research Group's Collective Communication Library ☆47 · Updated 6 months ago
- DeepSeek-V3/R1 inference performance simulator ☆177 · Updated 10 months ago
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system ☆119 · Updated 3 weeks ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) ☆174 · Updated last year
- A framework for generating realistic LLM serving workloads ☆98 · Updated 3 months ago
- ☆32 · Updated last year
- Artifact from "Hardware Compute Partitioning on NVIDIA GPUs" (a fork of Bakita's repo; the maintainer is not one of the paper's authors) ☆53 · Updated 2 months ago
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆129 · Updated 2 months ago
- PerFlow-AI: a programmable performance analysis, modeling, and prediction tool for AI systems ☆28 · Updated 3 weeks ago
- ☆83 · Updated 3 months ago
- Accepted to MLSys 2026 ☆70 · Updated this week
- GPU TopK Benchmark ☆18 · Updated last year
- ☆233 · Updated last month
- ☆211 · Updated 2 months ago
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated 2 months ago
- [EuroSys '25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization ☆21 · Updated 5 months ago
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆76 · Updated 3 months ago