NVIDIA / nvshmem
NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process communication and coordination overheads by allowing programmers to perform one-sided communication from within CUDA kernels and on CUDA streams.
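To illustrate the one-sided, GPU-initiated model described above, here is a minimal NVSHMEM ring-exchange sketch: each PE writes its ID into a symmetric buffer on the next PE directly from device code. This is an illustrative sketch, not an official sample; it assumes the NVSHMEM headers and a multi-process launcher (e.g. `nvshmrun -np 2`) are available, and error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <nvshmem.h>
#include <nvshmemx.h>

// One-sided put issued from inside a CUDA kernel: each PE writes its own
// ID into the symmetric buffer of the next PE in a ring. No host-side
// send/recv pairing is required.
__global__ void ring_put(int *dest) {
    int my_pe = nvshmem_my_pe();
    int n_pes = nvshmem_n_pes();
    int peer  = (my_pe + 1) % n_pes;
    nvshmem_int_p(dest, my_pe, peer);  // device-initiated remote write
}

int main() {
    nvshmem_init();
    int my_pe = nvshmem_my_pe();

    // Symmetric allocation: the same buffer exists at every PE and is
    // remotely accessible by all of them.
    int *dest = (int *)nvshmem_malloc(sizeof(int));

    ring_put<<<1, 1>>>(dest);
    nvshmem_barrier_all();  // completes outstanding puts across all PEs

    int received;
    cudaMemcpy(&received, dest, sizeof(int), cudaMemcpyDeviceToHost);
    printf("PE %d received %d\n", my_pe, received);

    nvshmem_free(dest);
    nvshmem_finalize();
    return 0;
}
```

The same puts can also be issued on CUDA streams from the host via the `nvshmemx_*_on_stream` variants, which is the other mode of use the description mentions.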
☆351 · Updated last week
Alternatives and similar repositories for nvshmem
Users interested in nvshmem are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆181 · Updated last week
- Perplexity GPU Kernels ☆497 · Updated last month
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆91 · Updated this week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆138 · Updated last month
- ☆141 · Updated 9 months ago
- ☆92 · Updated 11 months ago
- DeeperGEMM: crazy optimized version ☆72 · Updated 5 months ago
- ☆307 · Updated 3 weeks ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆186 · Updated 8 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆121 · Updated 5 months ago
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- Allow torch tensor memory to be released and resumed later ☆150 · Updated this week
- ☆240 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 3 months ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆160 · Updated this week
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆258 · Updated last week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆389 · Updated this week
- GitHub mirror of the triton-lang/triton repo. ☆86 · Updated this week
- Fastest kernels written from scratch ☆374 · Updated last month
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆69 · Updated 3 weeks ago
- ☆65 · Updated 5 months ago
- ☆150 · Updated 5 months ago
- Kernels, of the mega variety ☆586 · Updated 3 weeks ago
- Thunder Research Group's Collective Communication Library ☆42 · Updated 3 months ago
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆66 · Updated this week
- CUTLASS and CuTe examples ☆91 · Updated this week
- ☆45 · Updated 5 months ago
- MSCCL++: a GPU-driven communication stack for scalable AI applications ☆425 · Updated this week
- A Quirky Assortment of CuTe Kernels ☆627 · Updated last week