IST-DASLab / llmq
Quantized LLM training in pure CUDA/C++.
☆221 · Updated last week
Alternatives and similar repositories for llmq
Users interested in llmq are comparing it to the libraries listed below.
- Learning about CUDA by writing PTX code. ☆150 · Updated last year
- PTX-Tutorial, written purely by AIs (OpenAI's Deep Research and Claude 3.7). ☆66 · Updated 9 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆64 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆244 · Updated 7 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand. ☆195 · Updated 6 months ago
- Coding CUDA every day! ☆71 · Updated 2 weeks ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆177 · Updated this week
- ☆263 · Updated this week
- SIMD quantization kernels. ☆93 · Updated 3 months ago
- Learn CUDA with PyTorch. ☆124 · Updated 3 weeks ago
- Dion optimizer algorithm. ☆404 · Updated last week
- Simple MPI implementation for prototyping or learning. ☆293 · Updated 4 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆327 · Updated last month
- High-Performance SGEMM on CUDA devices. ☆113 · Updated 11 months ago
- ☆81 · Updated last week
- An implementation of the transformer architecture as an NVIDIA CUDA kernel. ☆196 · Updated 2 years ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆153 · Updated 2 years ago
- Helpful kernel tutorials and examples for tile-based GPU programming. ☆456 · Updated this week
- MoE training for Me and You and maybe other people. ☆239 · Updated last week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆441 · Updated 9 months ago
- Fast low-bit matmul kernels in Triton. ☆410 · Updated this week
- ☆178 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels. ☆178 · Updated this week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP. ☆141 · Updated 3 months ago
- ☆127 · Updated 2 months ago
- Ship correct and fast LLM kernels to PyTorch. ☆126 · Updated this week
- Cataloging released Triton kernels. ☆278 · Updated 3 months ago
- Step-by-step implementation of a fast softmax kernel in CUDA. ☆59 · Updated 11 months ago
- ☆82 · Updated 2 weeks ago
- Hand-rolled GPU communications library. ☆76 · Updated 3 weeks ago