tinygrad / open-gpu-kernel-modules
NVIDIA Linux open GPU with P2P support
☆914 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for open-gpu-kernel-modules
- Tile primitives for speedy kernels ☆1,661 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆626 · Updated 7 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆702 · Updated this week
- Open weights language model from Google DeepMind, based on Griffin. ☆607 · Updated 4 months ago
- Distributed Training Over-The-Internet ☆688 · Updated 2 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483 · Updated 3 weeks ago
- Serving multiple LoRA finetuned LLMs as one ☆986 · Updated 6 months ago
- Large-scale LLM inference engine ☆1,140 · Updated this week
- A flexible framework for experiencing cutting-edge LLM inference optimizations ☆737 · Updated last week
- Puzzles for learning Triton ☆1,138 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,680 · Updated this week
- FlashAttention (Metal Port) ☆387 · Updated last month
- An implementation of bucketMul LLM inference ☆214 · Updated 4 months ago
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆976 · Updated 5 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆526 · Updated this week
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆309 · Updated 5 months ago
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,063 · Updated 6 months ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆624 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- Felafax is building AI infra for non-NVIDIA GPUs ☆509 · Updated this week
- Llama 2 Everywhere (L2E) ☆1,511 · Updated 3 weeks ago
- nanoGPT-style version of Llama 3.1 ☆1,246 · Updated 3 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,585 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆636 · Updated this week