thunlp / TritonBench
TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
☆87 · Updated 4 months ago
Alternatives and similar repositories for TritonBench
Users interested in TritonBench are comparing it to the libraries listed below.
- ☆65 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 5 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated last month
- ☆93 · Updated 11 months ago
- ☆50 · Updated 5 months ago
- ☆120 · Updated 2 months ago
- Implementation of Flash Attention using CuTe. ☆96 · Updated 10 months ago
- ☆35 · Updated last week
- Estimate MFU for DeepSeek-V3. ☆26 · Updated 9 months ago
- ☆102 · Updated 5 months ago
- 16-fold memory access reduction with nearly no loss. ☆104 · Updated 7 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago
- DeeperGEMM: a heavily optimized version. ☆72 · Updated 5 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆60 · Updated last week
- Utility scripts for PyTorch (e.g., a memory profiler that understands more low-level allocations such as NCCL's). ☆62 · Updated last month
- ☆154 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning. ☆120 · Updated last week
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference". ☆148 · Updated 2 weeks ago
- ☆82 · Updated 9 months ago
- ☆39 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆272 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆221 · Updated 2 years ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆75 · Updated last week
- ☆145 · Updated 8 months ago
- ☆112 · Updated last year
- ☆58 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM. ☆169 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆117 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference". ☆81 · Updated 4 months ago
- Tile-based language built for AI computation across all scales. ☆74 · Updated this week