ashvardanian / PyBindToGPUs
Parallel computing starter project to build GPU & CPU kernels in CUDA & C++ and call them from Python via PyBind11, without a single line of CMake
☆30 · Updated last month
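To make the pattern concrete, here is a minimal, hypothetical sketch of what binding a kernel with PyBind11 and building it without CMake can look like. It is not code from the repository; the module name `toy_kernels` and the `scale` function are invented for illustration, and a CUDA kernel would be wrapped the same way behind a C++ launcher function.

```cpp
// scale.cpp — hypothetical sketch, not taken from the PyBindToGPUs repo.
// A trivial CPU "kernel" exposed to Python via PyBind11; a GPU kernel
// would sit behind a C++ wrapper bound the same way.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // enables std::vector <-> Python list conversion
#include <vector>

// Scale every element of a vector by `factor`.
std::vector<float> scale(std::vector<float> data, float factor) {
    for (auto &x : data) x *= factor;
    return data;
}

PYBIND11_MODULE(toy_kernels, m) {
    m.def("scale", &scale, "Multiply every element of `data` by `factor`",
          pybind11::arg("data"), pybind11::arg("factor"));
}
```

One way to build it with no CMake at all is pybind11's documented compiler one-liner, e.g. `c++ -O3 -shared -std=c++17 -fPIC $(python -m pybind11 --includes) scale.cpp -o toy_kernels$(python3-config --extension-suffix)`, after which `import toy_kernels` works directly from Python.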
Alternatives and similar repositories for PyBindToGPUs
Users interested in PyBindToGPUs are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆108 · Updated last year
- High-Performance SGEMM on CUDA devices ☆112 · Updated 10 months ago
- Extensible collectives library in Triton ☆91 · Updated 8 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated last week
- Effective transpose on Hopper GPUs ☆27 · Updated 2 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 3 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 3 weeks ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆312 · Updated this week
- Fast and Furious AMD Kernels ☆298 · Updated this week
- Hand-rolled GPU communications library ☆70 · Updated last week
- Quantized LLM training in pure CUDA/C++. ☆218 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆151 · Updated 2 years ago
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated 11 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆196 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆401 · Updated last week
- torchcomms: a modern PyTorch communications API ☆295 · Updated this week
- 🏙 Interactive performance profiling and debugging tool for PyTorch neural networks. ☆64 · Updated 10 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆454 · Updated 2 weeks ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆69 · Updated 7 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆160 · Updated 2 weeks ago
- Triton-based Symmetric Memory operators and examples ☆63 · Updated last month
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆149 · Updated 2 weeks ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆116 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆244 · Updated 6 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆57 · Updated last week
- Learning about CUDA by writing PTX code. ☆147 · Updated last year
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆393 · Updated 2 weeks ago
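The last entry describes NVSHMEM, whose distinctive feature is GPU-initiated communication over symmetric memory. As a hedged illustration of that idea, here is a minimal ring-exchange sketch in CUDA C++: each PE (process) writes its id into the next PE's symmetric buffer from inside a kernel. The API names come from the NVSHMEM SDK; the program layout, the one-GPU-per-PE mapping, and the build/run commands are assumptions, not material from the listing.

```cpp
// ring_put.cu — hypothetical NVSHMEM sketch (not from any repo above).
#include <cstdio>
#include <cuda_runtime.h>
#include <nvshmem.h>

// Device-initiated one-sided put: write this PE's id into the
// symmetric `slot` of the next PE in the ring.
__global__ void ring_put(int *slot, int mype, int npes) {
    nvshmem_int_p(slot, mype, (mype + 1) % npes);
}

int main() {
    nvshmem_init();                      // bootstrap the multi-process job
    int mype = nvshmem_my_pe();
    int npes = nvshmem_n_pes();
    cudaSetDevice(mype);                 // assumes one GPU per PE

    // Symmetric allocation: the same buffer exists on every PE.
    int *slot = static_cast<int *>(nvshmem_malloc(sizeof(int)));

    ring_put<<<1, 1>>>(slot, mype, npes);
    cudaDeviceSynchronize();
    nvshmem_barrier_all();               // make all puts globally visible

    int received = -1;
    cudaMemcpy(&received, slot, sizeof(int), cudaMemcpyDeviceToHost);
    printf("PE %d of %d received %d\n", mype, npes, received);

    nvshmem_free(slot);
    nvshmem_finalize();
    return 0;
}
```

Compiled against the NVSHMEM SDK (typically `nvcc -rdc=true` plus the NVSHMEM include and library paths) and launched with its `nvshmrun` launcher or `mpirun`, each of the N PEs should print the id of its predecessor in the ring; exact flags vary by install.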