NVIDIA / nvshmem
NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process communication and coordination overheads by allowing programmers to perform one-sided communication from within CUDA kernels and on CUDA streams.
☆443 · Updated last week
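The description above highlights NVSHMEM's defining feature: communication can be initiated directly from device code rather than only from the host. As a rough illustration, below is a minimal ring-shift sketch in the style of the examples in the NVSHMEM documentation; the kernel name, the one-PE-per-GPU mapping, and the omission of error handling are simplifications, and job launch (e.g. with `nvshmrun` or an MPI launcher) is assumed to be handled separately.

```cuda
// Minimal sketch: each PE writes its rank into the symmetric buffer of the
// next PE, issuing the one-sided put from inside a CUDA kernel.
#include <stdio.h>
#include <nvshmem.h>
#include <nvshmemx.h>

__global__ void simple_shift(int *destination) {
    int mype = nvshmem_my_pe();
    int npes = nvshmem_n_pes();
    int peer = (mype + 1) % npes;
    // Device-initiated one-sided communication: put this PE's rank
    // into the peer's copy of the symmetric object `destination`.
    nvshmem_int_p(destination, mype, peer);
}

int main(void) {
    int msg = -1;
    cudaStream_t stream;

    nvshmem_init();
    // Map this PE to a GPU on its node (assumes one GPU per PE).
    int mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
    cudaSetDevice(mype_node);
    cudaStreamCreate(&stream);

    // Symmetric allocation: every PE gets a corresponding buffer.
    int *destination = (int *)nvshmem_malloc(sizeof(int));

    simple_shift<<<1, 1, 0, stream>>>(destination);
    // Stream-ordered barrier: waits until all PEs' puts have completed.
    nvshmemx_barrier_all_on_stream(stream);
    cudaMemcpyAsync(&msg, destination, sizeof(int),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    printf("PE %d received %d\n", nvshmem_my_pe(), msg);

    nvshmem_free(destination);
    nvshmem_finalize();
    return 0;
}
```

Because the kernel calls NVSHMEM device functions, the build would typically use relocatable device code (`nvcc -rdc=true`) and link against the NVSHMEM host and device libraries; the exact flags depend on the NVSHMEM version and install.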
Alternatives and similar repositories for nvshmem
Users interested in nvshmem are comparing it to the libraries listed below.
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆148 · Updated this week
- torchcomms: a modern PyTorch communications API ☆319 · Updated this week
- Perplexity GPU Kernels ☆552 · Updated 2 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆554 · Updated this week
- A lightweight design for computation-communication overlap. ☆207 · Updated 2 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆451 · Updated this week
- GitHub mirror of the triton-lang/triton repo. ☆119 · Updated this week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆152 · Updated 3 months ago
- Open ABI and FFI for Machine Learning Systems ☆293 · Updated this week
- ☆338 · Updated last week
- ☆154 · Updated last year
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆129 · Updated last month
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆182 · Updated this week
- Thunder Research Group's Collective Communication Library ☆46 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆308 · Updated this week
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆77 · Updated 3 weeks ago
- An experimental CPU backend for Triton ☆168 · Updated 2 months ago
- Perplexity's open-source garden for inference technology ☆324 · Updated 2 weeks ago
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆83 · Updated 3 months ago
- Allow torch tensor memory to be released and resumed later ☆199 · Updated last month
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆191 · Updated 11 months ago
- Fastest kernels written from scratch ☆517 · Updated 3 months ago
- ☆255 · Updated last year
- NCCL Profiling Kit ☆150 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆454 · Updated 7 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆522 · Updated last week
- ☆270 · Updated last week
- ☆73 · Updated last year
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆142 · Updated 8 months ago