aikitoria / open-gpu-kernel-modules
NVIDIA Linux open GPU kernel modules with P2P support
☆83 · Updated 3 weeks ago
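Since the headline feature of this fork is peer-to-peer (P2P) access on GPUs where the stock driver disables it, here is a minimal CUDA sketch of how one might verify P2P after installing the patched modules. The two-GPU setup and device ordinals 0 and 1 are assumptions, not something the repository prescribes:

```cuda
// p2p_check.cu — minimal sketch: check and enable peer-to-peer access
// between two GPUs. Assumes at least two CUDA devices are present;
// device ordinals 0 and 1 are illustrative.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("Need at least two GPUs for a P2P check.\n");
        return 1;
    }

    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);  // can device 0 access device 1?
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);  // and the reverse direction
    printf("P2P 0->1: %s, 1->0: %s\n",
           canAccess01 ? "yes" : "no", canAccess10 ? "yes" : "no");

    if (canAccess01 && canAccess10) {
        // Enable peer access in both directions so cudaMemcpyPeer
        // can move data GPU-to-GPU without bouncing through host memory.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        printf("Peer access enabled in both directions.\n");
    }
    return 0;
}
```

Build with `nvcc p2p_check.cu -o p2p_check`. On consumer cards running the stock kernel modules the check typically reports no peer access; the point of this fork is to make it succeed on supported configurations.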
Alternatives and similar repositories for open-gpu-kernel-modules
Users interested in open-gpu-kernel-modules are comparing it to the libraries listed below.
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆562 · Updated this week
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆119 · Updated 3 weeks ago
- ☆156 · Updated 5 months ago
- ☆46 · Updated last month
- Samples of good AI-generated CUDA kernels ☆92 · Updated 6 months ago
- LLM Inference on consumer devices ☆125 · Updated 8 months ago
- Sparse Inferencing for transformer-based LLMs ☆213 · Updated 3 months ago
- Bamboo-7B Large Language Model ☆92 · Updated last year
- QuIP quantization ☆61 · Updated last year
- Fast and memory-efficient exact attention ☆201 · Updated last month
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated last week
- InferX: Inference as a Service Platform ☆139 · Updated this week
- ☆63 · Updated 5 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆46 · Updated last month
- A pipeline-parallel training script for LLMs. ☆163 · Updated 7 months ago
- Automatically quantize GGUF models ☆217 · Updated last month
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 9 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆586 · Updated this week
- GPU benchmark ☆73 · Updated 10 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 9 months ago
- ☆60 · Updated 6 months ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆137 · Updated 3 months ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- An OpenAI API-compatible LLM inference server based on ExLlamaV2. ☆25 · Updated last year
- ☆77 · Updated 11 months ago
- KV cache compression for high-throughput LLM inference ☆143 · Updated 9 months ago
- Input your VRAM and RAM and the toolchain will produce a GGUF model tuned to your system within seconds; flexible model sizing and lowes… ☆65 · Updated last week
- ☆107 · Updated 3 months ago