Libraries-Openly-Fused / FusedKernelLibrary
Implementation of a methodology that enables arbitrary user-defined GPU kernel fusion for non-CUDA programmers.
☆25 · Updated last week
Alternatives and similar repositories for FusedKernelLibrary
Users interested in FusedKernelLibrary are comparing it to the libraries listed below.
- ☆99 · Updated 4 months ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆86 · Updated last month
- Quantized Attention on GPU ☆44 · Updated 10 months ago
- ☆98 · Updated last month
- ☆57 · Updated last year
- ☆72 · Updated 6 months ago
- Make SGLang go brrr ☆33 · Updated last week
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆70 · Updated 2 months ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆32 · Updated 10 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆230 · Updated last week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated 2 weeks ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆44 · Updated 3 months ago
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆193 · Updated last month
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer for Triton Kernels ☆152 · Updated this week
- ☆50 · Updated 4 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆43 · Updated 3 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆64 · Updated 6 months ago
- [WIP] Better (FP8) attention for Hopper ☆33 · Updated 7 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆40 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆100 · Updated 3 months ago
- Fast and memory-efficient exact k-means ☆91 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆164 · Updated last week
- ☆30 · Updated 3 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆50 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆118 · Updated 3 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆111 · Updated last week
- ☆129 · Updated 4 months ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆79 · Updated last week
- LLM training in simple, raw C/CUDA ☆105 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 3 months ago