ashvardanian / PyBindToGPUs
Parallel Computing starter project to build GPU & CPU kernels in CUDA & C++ and call them from Python using PyBind11, without a single line of CMake
☆31 · Updated 3 months ago
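For context on what the featured project does, here is a minimal sketch of the PyBind11 pattern it is built around: a C++ kernel exposed as an importable Python module, compiled with a single compiler invocation instead of CMake. The file name `saxpy_bind.cpp` and the function names are illustrative, not taken from the repository; a real CUDA kernel launch would slot into the same binding.

```cpp
// saxpy_bind.cpp — hypothetical example, not code from the repository.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // automatic std::vector <-> Python list conversion
#include <vector>

namespace py = pybind11;

// A plain C++ SAXPY kernel standing in for a CUDA launch:
// computes y[i] = a * x[i] + y[i] over the whole vector.
std::vector<float> saxpy(float a, const std::vector<float>& x, std::vector<float> y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
    return y;
}

PYBIND11_MODULE(saxpy_bind, m) {
    m.doc() = "Minimal PyBind11 binding sketch";
    m.def("saxpy", &saxpy, "Compute a*x + y elementwise",
          py::arg("a"), py::arg("x"), py::arg("y"));
}
```

The "no CMake" part comes from building the extension directly, e.g. with the one-liner from the PyBind11 docs: `c++ -O3 -Wall -shared -std=c++17 -fPIC $(python3 -m pybind11 --includes) saxpy_bind.cpp -o saxpy_bind$(python3-config --extension-suffix)`. After that, `import saxpy_bind; saxpy_bind.saxpy(2.0, [1.0, 2.0], [3.0, 4.0])` returns `[5.0, 8.0]`.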
Alternatives and similar repositories for PyBindToGPUs
Users interested in PyBindToGPUs are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated last year
- Awesome utilities for performance profiling ☆199 · Updated 11 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆155 · Updated 2 years ago
- Fast and Furious AMD Kernels ☆348 · Updated 2 weeks ago
- Effective transpose on Hopper GPU ☆27 · Updated 5 months ago
- Learning about CUDA by writing PTX code ☆151 · Updated last year
- Hand-Rolled GPU communications library ☆81 · Updated 2 months ago
- Quantized LLM training in pure CUDA/C++ ☆235 · Updated 2 weeks ago
- A list of awesome resources and blogs on topics related to Unum ☆45 · Updated 2 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated this week
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism ☆157 · Updated 2 months ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- Implementation of the paper "Lossless Compression of Vector IDs for Approximate Nearest Neighbor Search" by Severo et al. ☆89 · Updated 3 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- CUDA extensions for PyTorch ☆12 · Updated 2 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆200 · Updated last week
- A lightweight, user-friendly data plane for LLM training ☆38 · Updated 4 months ago
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆417 · Updated 3 weeks ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- ☆21 · Updated 11 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- ☆92 · Updated last year
- We aim to redefine Data Parallel libraries' portability, performance, programmability, and maintainability by using C++ standard features, i… ☆46 · Updated last week
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- Some CUDA example code with READMEs ☆179 · Updated 2 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆63 · Updated 2 weeks ago
- TORCH_TRACE parser for PT2 ☆75 · Updated last week