ashvardanian / PyBindToGPUs
Parallel Computing starter project to build GPU & CPU kernels in CUDA & C++ and call them from Python without a single line of CMake using PyBind11
☆31 · Updated 3 months ago
Alternatives and similar repositories for PyBindToGPUs
Users that are interested in PyBindToGPUs are comparing it to the libraries listed below
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers ☆153 · Updated last year
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆383 · Updated 3 weeks ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- Hand-Rolled GPU communications library ☆81 · Updated 2 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆155 · Updated 2 years ago
- A list of awesome resources and blogs on topics related to Unum ☆45 · Updated 2 months ago
- Fast and Furious AMD Kernels ☆346 · Updated last week
- We aim to redefine Data Parallel libraries portability, performance, programmability and maintainability, by using C++ standard features, i… ☆46 · Updated this week
- extensible collectives library in triton ☆93 · Updated 10 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆200 · Updated this week
- ☆21 · Updated 10 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated last week
- Lightweight Llama 3 8B Inference Engine in CUDA C ☆53 · Updated 10 months ago
- Effective transpose on Hopper GPU ☆27 · Updated 4 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆327 · Updated 3 weeks ago
- Awesome utilities for performance profiling ☆199 · Updated 10 months ago
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆48 · Updated 5 months ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆157 · Updated 2 months ago
- Thrust, CUB, TBB, AVX2, AVX-512, CUDA, OpenCL, OpenMP, Metal, and Rust - all it takes to sum a lot of numbers fast! ☆116 · Updated 6 months ago
- Quantized LLM training in pure CUDA/C++. ☆233 · Updated last week
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆70 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- Pipeline parallelism for the minimalist ☆38 · Updated 5 months ago
- ☆53 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆375 · Updated this week
- Benchmarks to capture important workloads. ☆32 · Updated last week
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago