ashvardanian / PyBindToGPUs
Parallel-computing starter project that builds GPU & CPU kernels in CUDA & C++ and calls them from Python via PyBind11, without a single line of CMake (see the minimal binding sketch below).
☆30 · Updated 3 weeks ago
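Since the project's pitch is calling native kernels from Python without CMake, a minimal sketch of that pybind11 workflow may help orient readers. Everything below is hypothetical and not taken from PyBindToGPUs itself: the module name `saxpy_demo` and the `saxpy` function exist only to illustrate the binding pattern.

```cpp
// saxpy_demo.cpp — a minimal, hypothetical pybind11 binding sketch.
// Build without CMake (Linux/macOS, assuming `pip install pybind11`):
//   c++ -O3 -shared -std=c++17 -fPIC $(python3 -m pybind11 --includes) \
//       saxpy_demo.cpp -o saxpy_demo$(python3-config --extension-suffix)
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // automatic std::vector <-> Python list conversion
#include <algorithm>
#include <vector>

namespace py = pybind11;

// A toy CPU kernel: y[i] += a * x[i] (SAXPY), the kind of routine
// one might later port to a CUDA kernel.
std::vector<float> saxpy(float a, const std::vector<float> &x, std::vector<float> y) {
    const std::size_t n = std::min(x.size(), y.size());
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
    return y;
}

PYBIND11_MODULE(saxpy_demo, m) {
    m.doc() = "Toy CPU kernel exposed to Python via pybind11";
    m.def("saxpy", &saxpy, "Compute y := a*x + y element-wise",
          py::arg("a"), py::arg("x"), py::arg("y"));
}
```

From Python, `import saxpy_demo; saxpy_demo.saxpy(2.0, [1.0, 2.0], [3.0, 4.0])` returns `[5.0, 8.0]`. Real bindings typically accept NumPy arrays via `py::array_t` to avoid copies; a list-based signature just keeps this sketch short.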
Alternatives and similar repositories for PyBindToGPUs
Users interested in PyBindToGPUs are comparing it to the repositories listed below.
- LLM training in simple, raw C/CUDA ☆107 · Updated last year
- High-Performance SGEMM on CUDA devices ☆109 · Updated 9 months ago
- Fast and vectorizable algorithms for searching in a vector of sorted floating-point numbers ☆152 · Updated 10 months ago
- CUDA extensions for PyTorch ☆11 · Updated 6 months ago
- A list of awesome resources and blogs on topics related to Unum ☆42 · Updated this week
- ☆21 · Updated 8 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated 2 weeks ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆147 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 2 months ago
- Effective transpose on Hopper GPU ☆25 · Updated 2 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆61 · Updated last week
- Extensible collectives library in Triton ☆90 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 support ☆27 · Updated last year
- No-GIL Python environment featuring NVIDIA Deep Learning libraries ☆68 · Updated 6 months ago
- Learning about CUDA by writing PTX code ☆146 · Updated last year
- ☆41 · Updated last week
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism ☆148 · Updated 6 months ago
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- Python package of rocm-smi-lib ☆24 · Updated 3 months ago
- Home for the OctoML PyTorch Profiler ☆114 · Updated 2 years ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆360 · Updated this week
- TritonParse: a compiler tracer, visualizer, and reproducer for Triton kernels ☆167 · Updated last week
- Pipeline parallelism for the minimalist ☆35 · Updated 3 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆55 · Updated 3 weeks ago
- Official problem sets / reference kernels for the GPU MODE leaderboard! ☆116 · Updated last week
- Quantized LLM training in pure CUDA/C++ ☆214 · Updated this week
- Awesome utilities for performance profiling ☆195 · Updated 8 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning ☆305 · Updated last week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆138 · Updated last month
- MLPerf™ logging library ☆37 · Updated 3 weeks ago