IST-DASLab / llmq
Quantized LLM training in pure CUDA/C++.
☆32 · Updated this week
Alternatives and similar repositories for llmq
Users interested in llmq are comparing it to the repositories listed below.
- Learning about CUDA by writing PTX code. ☆137 · Updated last year
- PTX-Tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7). ☆66 · Updated 6 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand. ☆193 · Updated 4 months ago
- High-performance SGEMM on CUDA devices. ☆103 · Updated 8 months ago
- Learn CUDA with PyTorch. ☆84 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆142 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆226 · Updated 4 months ago
- SIMD quantization kernels. ☆87 · Updated 3 weeks ago
- A bunch of kernels that might make stuff slower 😉 ☆59 · Updated this week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆417 · Updated 6 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆97 · Updated this week
- LLM training in simple, raw C/CUDA. ☆105 · Updated last year
- Simple MPI implementation for prototyping or learning. ☆280 · Updated last month
- Dion optimizer algorithm. ☆360 · Updated this week
- Fast low-bit matmul kernels in Triton. ☆373 · Updated last week
- Ring-attention experiments. ☆152 · Updated 11 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations. ☆68 · Updated 4 months ago
- Step-by-step implementation of a fast softmax kernel in CUDA. ☆50 · Updated 8 months ago
- Cataloging released Triton kernels. ☆261 · Updated 3 weeks ago
- 👷 Build compute kernels. ☆149 · Updated this week
- PyTorch Single Controller. ☆425 · Updated this week
- My submission for the GPUMODE/AMD fp8 mm challenge. ☆29 · Updated 3 months ago
- Coding CUDA every day! ☆61 · Updated 5 months ago
- NanoGPT speedrunning for the poor T4 enjoyers. ☆72 · Updated 5 months ago
- Load compute kernels from the Hub. ☆290 · Updated last week