NVIDIA Linux open GPU with P2P support
☆1,361 · Jun 6, 2025 · Updated 10 months ago
Alternatives and similar repositories for open-gpu-kernel-modules
Users interested in open-gpu-kernel-modules are comparing it to the libraries listed below.
- NVIDIA Linux open GPU with P2P support · ☆193 · Apr 5, 2026 · Updated last month
- Large-scale LLM inference engine · ☆1,714 · Updated this week
- Tile primitives for speedy kernels · ☆3,336 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ☆4,511 · Mar 4, 2026 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving · ☆5,544 · Updated this week
- You like pytorch? You like micrograd? You love tinygrad! ❤️ · ☆32,603 · Updated this week
- Development repository for the Triton language and compiler · ☆19,087 · Updated this week
- ☆1,087 · May 18, 2025 · Updated 11 months ago
- A throughput-oriented high-performance serving framework for LLMs · ☆956 · Mar 29, 2026 · Updated last month
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel · ☆2,234 · Updated this week
- Efficient Triton Kernels for LLM Training · ☆6,315 · Apr 27, 2026 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… · ☆13,545 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆78,979 · Updated this week
- High-speed Large Language Model Serving for Local Deployment · ☆9,390 · Jan 24, 2026 · Updated 3 months ago
- Fast and memory-efficient exact attention · ☆23,628 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. · ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- LLM training in simple, raw C/CUDA · ☆29,780 · Jun 26, 2025 · Updated 10 months ago
- PyTorch native quantization and sparsity for training and inference · ☆2,807 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. · ☆26,832 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs. · ☆689 · Apr 8, 2026 · Updated 3 weeks ago
- A lightweight design for computation-communication overlap. · ☆229 · Jan 20, 2026 · Updated 3 months ago
- Tensor library for machine learning · ☆14,560 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. · ☆9,366 · Oct 28, 2024 · Updated last year
- ☆454 · Apr 6, 2025 · Updated last year
- NVIDIA Inference Xfer Library (NIXL) · ☆1,011 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. · ☆5,242 · Updated this week
- LLM inference in C/C++ · ☆107,892 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. · ☆8,168 · Apr 20, 2026 · Updated 2 weeks ago
- NVIDIA Linux open GPU kernel module source · ☆16,939 · Updated this week
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. · ☆6,204 · Aug 22, 2025 · Updated 8 months ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading · ☆10,109 · Sep 7, 2024 · Updated last year
- DeepEP: an efficient expert-parallel communication library · ☆9,589 · Updated this week
- Training LLMs with QLoRA + FSDP · ☆1,542 · Nov 9, 2024 · Updated last year
- CUDA Templates and Python DSLs for High-Performance Linear Algebra · ☆9,663 · Apr 25, 2026 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆1,065 · Sep 4, 2024 · Updated last year
- CUDA checkpoint and restore utility · ☆446 · Sep 15, 2025 · Updated 7 months ago
- The official API server for Exllama. OAI compatible, lightweight, and fast. · ☆1,205 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,312 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆600 · Aug 12, 2025 · Updated 8 months ago