IST-DASLab / llmq
Quantized LLM training in pure CUDA/C++.
☆206 · Updated this week
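To give a feel for what "quantized LLM training in pure CUDA/C++" involves, here is a minimal sketch of the primitive such kernels build on: a per-tensor symmetric int8 quantize/dequantize round trip. This is not llmq's actual code; the kernel names, per-tensor scaling, and toy data are all assumptions made for the example.

```cuda
// Illustrative sketch only: per-tensor symmetric int8 quantization,
// the basic building block behind quantized training kernels.
#include <cstdio>
#include <cstdint>
#include <cmath>
#include <cuda_runtime.h>

__global__ void quantize_int8(const float* in, int8_t* out, float inv_scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Round to nearest and clamp to the int8 range.
        float q = rintf(in[i] * inv_scale);
        q = fminf(fmaxf(q, -128.0f), 127.0f);
        out[i] = static_cast<int8_t>(q);
    }
}

__global__ void dequantize_int8(const int8_t* in, float* out, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = static_cast<float>(in[i]) * scale;
}

int main() {
    const int n = 1024;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = sinf(0.01f * i);  // toy data in [-1, 1]

    // Per-tensor symmetric scale: map the max |x| to 127.
    float max_abs = 0.0f;
    for (int i = 0; i < n; ++i) max_abs = fmaxf(max_abs, fabsf(h_in[i]));
    float scale = max_abs / 127.0f;

    float *d_in, *d_out;
    int8_t* d_q;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMalloc(&d_q, n * sizeof(int8_t));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;
    quantize_int8<<<blocks, threads>>>(d_in, d_q, 1.0f / scale, n);
    dequantize_int8<<<blocks, threads>>>(d_q, d_out, scale, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Round-trip error is bounded by half a quantization step (scale / 2).
    float max_err = 0.0f;
    for (int i = 0; i < n; ++i) max_err = fmaxf(max_err, fabsf(h_in[i] - h_out[i]));
    printf("max round-trip error: %g (scale %g)\n", max_err, scale);

    cudaFree(d_in); cudaFree(d_out); cudaFree(d_q);
    return 0;
}
```

Compile with `nvcc quantize_demo.cu -o quantize_demo` and run; the reported error should stay below half a quantization step.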
Alternatives and similar repositories for llmq
Users interested in llmq are comparing it to the libraries listed below.
- Learning about CUDA by writing PTX code. ☆144 · Updated last year
- A PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 6 months ago
- SIMD quantization kernels ☆87 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated last week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆145 · Updated 2 years ago
- Learn CUDA with PyTorch ☆92 · Updated 3 weeks ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated last week
- Simple MPI implementation for prototyping or learning ☆286 · Updated 2 months ago
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆131 · Updated last month
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆190 · Updated 2 years ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆233 · Updated 5 months ago
- ☆174 · Updated last year
- Dion optimizer algorithm ☆369 · Updated 3 weeks ago
- LLM training in simple, raw C/CUDA ☆105 · Updated last year
- PyTorch Single Controller ☆438 · Updated last week
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆103 · Updated 3 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton ☆381 · Updated 3 weeks ago
- ring-attention experiments ☆154 · Updated last year
- 👷 Build compute kernels ☆163 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆62 · Updated this week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆296 · Updated 2 months ago
- ☆79 · Updated last month
- Step-by-step implementation of a fast softmax kernel in CUDA ☆52 · Updated 9 months ago
- In this repository, I'm implementing increasingly complex LLM inference optimizations ☆68 · Updated 5 months ago
- ☆240 · Updated this week
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 4 months ago
- Coding CUDA every day! ☆64 · Updated 6 months ago