vedantroy / gpu_kernels
☆27 · Updated last year
Alternatives and similar repositories for gpu_kernels
Users interested in gpu_kernels are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆85 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents ☆137 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆85 · Updated 2 months ago
- ☆109 · Updated 6 months ago
- Triton-based Symmetric Memory operators and examples ☆63 · Updated last month
- ☆113 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆130 · Updated 5 months ago
- LLaMA INT4 CUDA inference with AWQ ☆55 · Updated 10 months ago
- ☆50 · Updated 6 months ago
- ☆71 · Updated 7 months ago
- Extensible collectives library in Triton ☆91 · Updated 7 months ago
- ☆93 · Updated last year
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- ☆83 · Updated 9 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆243 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆122 · Updated last year
- ☆130 · Updated 5 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago
- GPTQ inference TVM kernel ☆39 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 5 months ago
- ☆65 · Updated 6 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆170 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 4 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆134 · Updated last week
- ring-attention experiments ☆155 · Updated last year
- Quantized Attention on GPU ☆44 · Updated last year
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆131 · Updated 11 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆223 · Updated 2 years ago