ashvardanian / PyBindToGPUs
Parallel-computing starter project for building GPU & CPU kernels in CUDA & C++ and calling them from Python via PyBind11, without a single line of CMake
☆31 · Updated 3 months ago
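As a flavor of what the project covers, here is a minimal, hypothetical sketch (not taken from the PyBindToGPUs repo) of the general technique: a CPU kernel exposed to Python with PyBind11 and built with a single compiler invocation instead of CMake. The module name `saxpy` and its function are illustrative assumptions.

```cpp
// Hypothetical example, not from the repo: a CPU SAXPY "kernel"
// (y := a*x + y) bound to Python with PyBind11, no CMake involved.
//
// Build in one line (assumes `pip install pybind11`):
//   c++ -O3 -shared -std=c++17 -fPIC $(python3 -m pybind11 --includes) \
//       saxpy.cpp -o saxpy$(python3-config --extension-suffix)
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

// In-place SAXPY over 1-D float32 NumPy arrays.
void saxpy(float a, py::array_t<float> x, py::array_t<float> y) {
    auto xv = x.unchecked<1>();          // read-only view, no bounds checks
    auto yv = y.mutable_unchecked<1>();  // writable view into y
    for (py::ssize_t i = 0; i < xv.shape(0); ++i)
        yv(i) += a * xv(i);
}

PYBIND11_MODULE(saxpy, m) {
    m.def("saxpy", &saxpy, "In-place SAXPY: y += a * x",
          py::arg("a"), py::arg("x"), py::arg("y"));
}
```

From Python the module then behaves like any other extension: `import saxpy; saxpy.saxpy(2.0, x, y)` mutates `y` in place. A GPU variant would follow the same binding pattern, with the loop body replaced by a CUDA kernel launch and `nvcc` as the compiler.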
Alternatives and similar repositories for PyBindToGPUs
Users interested in PyBindToGPUs are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated last year
- A list of awesome resources and blogs on topics related to Unum ☆45 · Updated 3 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆64 · Updated 2 weeks ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- Fast and Furious AMD Kernels ☆348 · Updated 2 weeks ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- Effective transpose on Hopper GPU ☆27 · Updated 5 months ago
- ☆21 · Updated 11 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated this week
- TORCH_TRACE parser for PT2 ☆75 · Updated last week
- Quantized LLM training in pure CUDA/C++ ☆235 · Updated 2 weeks ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism ☆157 · Updated 2 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆200 · Updated last week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 4 months ago
- ☆44 · Updated this week
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆155 · Updated 2 years ago
- Learning about CUDA by writing PTX code ☆152 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated this week
- Awesome utilities for performance profiling ☆199 · Updated 11 months ago
- Hand-Rolled GPU communications library ☆81 · Updated 2 months ago
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆417 · Updated last month
- Learn CUDA with PyTorch ☆193 · Updated this week
- Pipeline parallelism for the minimalist ☆38 · Updated 6 months ago
- High-performance safetensors model loader ☆94 · Updated 3 weeks ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆249 · Updated 9 months ago