tinygrad / open-gpu-kernel-modules
NVIDIA Linux open GPU with P2P support
☆1,155 · Updated this week
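Since P2P support is the headline feature of this fork (peer-to-peer access between GPUs that the stock consumer driver does not expose), a minimal sketch of how to verify it is shown below. This is hypothetical usage, not part of the repo; it assumes a multi-GPU machine with PyTorch and the patched modules installed.

```python
# Minimal sketch (not part of this repo): query whether the driver
# reports peer-to-peer access between each pair of visible GPUs.
# Assumes a multi-GPU machine with PyTorch and CUDA available.
import torch

num_gpus = torch.cuda.device_count()
for src in range(num_gpus):
    for dst in range(num_gpus):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: P2P {'available' if ok else 'not available'}")
```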
Alternatives and similar repositories for open-gpu-kernel-modules
Users interested in open-gpu-kernel-modules are comparing it to the libraries listed below.
- Tile primitives for speedy kernels ☆2,420 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆833 · Updated 5 months ago
- ☆445 · Updated last month
- ☆1,040 · Updated 2 weeks ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,196 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆818 · Updated this week
- CUDA/Metal accelerated language model inference ☆581 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆519 · Updated this week
- FlashAttention (Metal Port) ☆492 · Updated 8 months ago
- FlashInfer: Kernel Library for LLM Serving ☆3,088 · Updated this week
- Large-scale LLM inference engine ☆1,435 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆831 · Updated 9 months ago
- Serving multiple LoRA finetuned LLMs as one ☆1,062 · Updated last year
- ☆536 · Updated 7 months ago
- NVIDIA Linux open GPU with P2P support ☆25 · Updated 2 weeks ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆375 · Updated last week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆850 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆544 · Updated last week
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆331 · Updated last month
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆639 · Updated last month
- Distributed Training Over-The-Internet ☆932 · Updated 3 weeks ago
- Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference. ☆2,074 · Updated last month
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,878 · Updated last year
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆349 · Updated last year
- Stateful load balancer custom-tailored for llama.cpp 🏓🦙 ☆764 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,450 · Updated this week
- Scalable and robust tree-based speculative decoding algorithm ☆345 · Updated 4 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆619 · Updated last month
- AI Tensor Engine for ROCm ☆201 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,176 · Updated 3 weeks ago