aryagxr / cudaLinks
coding CUDA every day!
☆61 · updated 4 months ago
Alternatives and similar repositories for cuda
Users interested in cuda are comparing it to the libraries listed below.
- making the official Triton tutorials actually comprehensible (☆54 · updated 3 weeks ago)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆219 · updated 4 months ago)
- ☆199 · updated 8 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code (☆405 · updated 6 months ago)
- Cataloging released Triton kernels (☆252 · updated last week)
- Learn CUDA with PyTorch (☆74 · updated last week)
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) (☆66 · updated 5 months ago)
- Efficient implementation of DeepSeek ops (blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X (☆69 · updated last month)
- ☆234 · updated this week
- QuTLASS: CUTLASS-powered quantized BLAS for deep learning (☆87 · updated last week)
- Ring-attention experiments (☆150 · updated 10 months ago)
- Learning about CUDA by writing PTX code (☆135 · updated last year)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (☆193 · updated 3 months ago)
- Fast low-bit matmul kernels in Triton (☆365 · updated this week)
- ☆117 · updated 5 months ago
- Learnings and programs related to CUDA (☆418 · updated 2 months ago)
- Tritonbench: a collection of PyTorch custom operators with example inputs to measure their performance (☆221 · updated this week)
- GPU kernels (☆193 · updated 4 months ago)
- Applied AI experiments and examples for PyTorch (☆295 · updated 3 weeks ago)
- TritonParse: a compiler tracer, visualizer, and mini-reproducer generator (WIP) for Triton kernels (☆150 · updated this week)
- ☆50 · updated 8 months ago
- KernelBench: can LLMs write GPU kernels? A benchmark of Torch → CUDA problems (☆557 · updated 2 weeks ago)
- 🤖 FFPA: extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large head dimensions; 1.8x-3x speedup vs SDPA EA 🎉 (☆212 · updated last month)
- ☆168 · updated last year
- Collection of kernels written in the Triton language (☆154 · updated 5 months ago)
- ☆39 · updated last month
- Write a fast kernel and run it on Discord; see how you compare against the best! (☆55 · updated this week)
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference (☆269 · updated last month)
- An implementation of the transformer architecture as an NVIDIA CUDA kernel (☆189 · updated last year)
- A minimal cache manager for PagedAttention, on top of llama3 (☆120 · updated last year)
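Several of the repositories above (the CUTLASS resources, the low-bit Triton matmul kernels, the blockwise FP8 GEMM) revolve around the same core idea: tiled matrix multiplication, where each output tile is computed by accumulating partial products over blocks of the shared K dimension. As a quick orientation before diving into those repos, here is a minimal NumPy sketch of that tiling pattern; the function name and block size are illustrative and not taken from any listed project.

```python
import numpy as np

def blocked_matmul(A, B, block=32):
    """Tiled matmul: compute C = A @ B one (block x block) output tile at a time.

    GPU kernels typically assign each output tile to a thread block and
    accumulate partial products over K tiles; this loop nest mirrors that
    structure on the CPU (NumPy slicing clamps at array edges, so ragged
    final tiles are handled automatically).
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, block):              # tile over rows of C
        for j in range(0, N, block):          # tile over cols of C
            acc = np.zeros((min(block, M - i), min(block, N - j)), dtype=A.dtype)
            for k in range(0, K, block):      # accumulate over K tiles
                acc += A[i:i + block, k:k + block] @ B[k:k + block, j:j + block]
            C[i:i + block, j:j + block] = acc
    return C
```

On a GPU the two outer loops become the launch grid and the inner loop runs per thread block with the tiles staged in shared memory; the arithmetic is identical.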