thunlp / TritonBench
TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
☆100 · Updated 6 months ago
Alternatives and similar repositories for TritonBench
Users interested in TritonBench are comparing it to the repositories listed below.
- Autonomous GPU Kernel Generation via Deep Agents ☆197 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆91 · Updated 3 months ago
- ☆65 · Updated 8 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆103 · Updated last week
- ☆100 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 9 months ago
- ☆116 · Updated 7 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆500 · Updated last week
- ☆125 · Updated 4 months ago
- ☆157 · Updated 10 months ago
- DeeperGEMM: crazy optimized version ☆74 · Updated 8 months ago
- Estimate MFU for DeepSeekV3 ☆26 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆308 · Updated last week
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆171 · Updated last week
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆71 · Updated last month
- Collection of kernels written in the Triton language ☆174 · Updated 9 months ago
- ☆39 · Updated 3 weeks ago
- ☆35 · Updated 9 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆104 · Updated 6 months ago
- ☆52 · Updated 7 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆90 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆39 · Updated last month
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆160 · Updated 2 months ago
- Implementation of Flash Attention using CuTe. ☆100 · Updated last year
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆80 · Updated last week
- ☆69 · Updated this week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆361 · Updated 5 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆155 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆276 · Updated 5 months ago