IBM / triton-dejavu
Framework to reduce autotune overhead to zero for well-known deployments.
☆ 92 · Updated 3 months ago
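The one-line description above is terse, so the following is a minimal sketch of the general idea behind zero-overhead autotuning: persist the winning configuration per kernel and input key so that later runs of a known deployment skip the tuning sweep entirely. This is an illustrative Python sketch, not triton-dejavu's actual API; the `DEJAVU_CACHE` path, the `cached_autotune` helper, and the `benchmark` callback are all hypothetical names introduced here.

```python
# Illustrative sketch only (not triton-dejavu's API): cache the best autotune
# config per (kernel, key) on disk so repeat runs skip the tuning sweep.
import json
import os

CACHE_PATH = os.environ.get("DEJAVU_CACHE", "autotune_cache.json")  # hypothetical cache file


def _load_cache() -> dict:
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}


def _save_cache(cache: dict) -> None:
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f, indent=2)


def cached_autotune(kernel_name: str, key: str, candidates: list[dict], benchmark) -> dict:
    """Return the best candidate config, benchmarking only on a cache miss.

    `candidates` holds JSON-serializable config dicts; `benchmark(cfg)` returns
    a measured runtime in milliseconds for one candidate.
    """
    cache = _load_cache()
    cache_key = f"{kernel_name}:{key}"
    if cache_key in cache:
        return cache[cache_key]  # cache hit: no benchmarking at all
    timed = [(benchmark(cfg), cfg) for cfg in candidates]  # cache miss: tune once
    best = min(timed, key=lambda t: t[0])[1]
    cache[cache_key] = best
    _save_cache(cache)
    return best
```

On a cache hit the launch path only pays a dictionary lookup, which is roughly what "reduce autotune overhead to zero" means for deployments whose shapes and hardware are already known.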
Alternatives and similar repositories for triton-dejavu
Users interested in triton-dejavu are comparing it to the libraries listed below.
- ☆ 100 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing. ☆ 104 · Updated 6 months ago
- DeeperGEMM: crazy optimized version ☆ 74 · Updated 8 months ago
- ☆ 65 · Updated 8 months ago
- ☆ 117 · Updated 7 months ago
- Extensible collectives library in Triton ☆ 92 · Updated 9 months ago
- ☆ 85 · Updated 11 months ago
- ☆ 39 · Updated last month
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆ 124 · Updated last year
- ☆ 52 · Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆ 140 · Updated 7 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆ 91 · Updated last year
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆ 108 · Updated 7 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆ 159 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆ 310 · Updated this week
- Quantized Attention on GPU ☆ 44 · Updated last year
- GPTQ inference TVM kernel ☆ 41 · Updated last year
- ☆ 78 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆ 276 · Updated 6 months ago
- ☆ 104 · Updated last year
- Collection of kernels written in the Triton language ☆ 174 · Updated 9 months ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆ 41 · Updated last month
- ☆ 126 · Updated 4 months ago
- A bunch of kernels that might make stuff slower 😉 ☆ 73 · Updated last week
- ☆ 115 · Updated last year
- Implements Flash Attention using CuTe. ☆ 100 · Updated last year
- A Triton JIT runtime and FFI provider in C++ ☆ 30 · Updated 3 weeks ago
- Autonomous GPU Kernel Generation via Deep Agents ☆ 211 · Updated this week
- An experimental communicating attention kernel based on DeepEP. ☆ 35 · Updated 5 months ago
- Ship correct and fast LLM kernels to PyTorch ☆ 132 · Updated this week