ashvardanian / PyBindToGPUs
A parallel-computing starter project for building GPU and CPU kernels in CUDA and C++ and calling them from Python via PyBind11, without a single line of CMake
☆29 · Updated this week
Alternatives and similar repositories for PyBindToGPUs
Users interested in PyBindToGPUs are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆105 · Updated last year
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries ☆64 · Updated 6 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated this week
- Effective transpose on Hopper GPU ☆25 · Updated last month
- Fast and vectorizable algorithms for searching in a vector of sorted floating-point numbers ☆151 · Updated 10 months ago
- PyTorch Single Controller ☆438 · Updated this week
- Awesome utilities for performance profiling ☆194 · Updated 7 months ago
- ☆21 · Updated 7 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆189 · Updated 3 weeks ago
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated 7 months ago
- Extensible collectives library in Triton ☆89 · Updated 6 months ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism ☆146 · Updated 6 months ago
- A list of awesome resources and blogs on topics related to Unum ☆41 · Updated last year
- ☆39 · Updated this week
- Quantized LLM training in pure CUDA/C++ ☆198 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 2 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆145 · Updated last year
- Lightweight Llama 3 8B inference engine in CUDA C ☆48 · Updated 6 months ago
- Hand-rolled GPU communications library ☆39 · Updated this week
- Python package for rocm-smi-lib ☆24 · Updated 3 months ago
- FlexAttention with FlashAttention3 support ☆27 · Updated last year
- Implementation of the paper "Lossless Compression of Vector IDs for Approximate Nearest Neighbor Search" by Severo et al. ☆82 · Updated 8 months ago
- ScalarLM: a unified training and inference stack ☆85 · Updated 2 weeks ago
- Custom PTX instruction benchmark ☆129 · Updated 7 months ago
- JaxPP, a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆54 · Updated 2 weeks ago
- Make Triton easier ☆48 · Updated last year
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆66 · Updated this week
- Example ML projects that use the Determined library ☆32 · Updated last year
- Memory Optimizations for Deep Learning (ICML 2023) ☆108 · Updated last year