aikitoria / open-gpu-kernel-modules
NVIDIA Linux open GPU with P2P support
☆126 · Updated 2 months ago
Alternatives and similar repositories for open-gpu-kernel-modules
Users interested in open-gpu-kernel-modules are comparing it to the libraries listed below.
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆600 · Updated 2 months ago
- Sparse inferencing for transformer-based LLMs ☆218 · Updated 5 months ago
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆232 · Updated last month
- LLM inference on consumer devices ☆129 · Updated 10 months ago
- ☆163 · Updated 7 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- ☆71 · Updated 7 months ago
- ☆51 · Updated last month
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆157 · Updated 7 months ago
- A simple Flash Attention v2 implementation for ROCm (RDNA3 GPUs, roc wmma), mainly used for Stable Diffusion (ComfyUI) on Windows ZLUDA en… ☆51 · Updated last year
- Fast and memory-efficient exact attention ☆214 · Updated this week
- ☆64 · Updated 8 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆49 · Updated 3 months ago
- A pipeline-parallel training script for LLMs. ☆166 · Updated 9 months ago
- Automatically quantize GGUF models ☆219 · Updated last month
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆626 · Updated last week
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Updated 2 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Samples of good AI-generated CUDA kernels ☆99 · Updated 8 months ago
- Lower Precision Floating Point Operations ☆66 · Updated last month
- InferX: Inference as a Service Platform ☆154 · Updated last week
- KV cache compression for high-throughput LLM inference ☆151 · Updated last year
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆839 · Updated last week
- ☆109 · Updated 5 months ago
- AI Tensor Engine for ROCm ☆348 · Updated last week
- Bamboo-7B Large Language Model ☆93 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment ☆145 · Updated 6 months ago
- QuIP quantization ☆61 · Updated last year