aikitoria / open-gpu-kernel-modules
NVIDIA Linux open GPU with P2P support
☆98 · Updated 3 weeks ago
Alternatives and similar repositories for open-gpu-kernel-modules
Users interested in open-gpu-kernel-modules are comparing it to the libraries listed below.
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆151 · Updated 2 weeks ago
- Sparse inference for transformer-based LLMs ☆215 · Updated 4 months ago
- ☆48 · Updated 2 weeks ago
- ☆159 · Updated 6 months ago
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆576 · Updated last month
- LLM inference on consumer devices ☆128 · Updated 9 months ago
- ☆66 · Updated 6 months ago
- Fast and memory-efficient exact attention ☆203 · Updated 3 weeks ago
- InferX: Inference as a Service Platform ☆143 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆225 · Updated this week
- QuIP quantization ☆61 · Updated last year
- A pipeline-parallel training script for LLMs. ☆164 · Updated 7 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆47 · Updated last month
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆319 · Updated last month
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆50 · Updated last year
- Samples of good AI-generated CUDA kernels ☆95 · Updated 6 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Automatically quantize GGUF models ☆219 · Updated 2 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆605 · Updated 2 weeks ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆94 · Updated this week
- ☆113 · Updated last month
- Bamboo-7B Large Language Model ☆93 · Updated last year
- 1.58-bit LLaMa model ☆83 · Updated last year
- High-speed and easy-to-use LLM serving framework for local deployment ☆139 · Updated 4 months ago
- Simple high-throughput inference library ☆153 · Updated 7 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 10 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆85 · Updated this week
- GPU benchmark ☆73 · Updated 11 months ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆266 · Updated last week
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆150 · Updated 5 months ago