thunlp / TritonBench
TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
☆80 · Updated 3 months ago
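For context on what the benchmark evaluates: a Triton operator is a GPU kernel written in Triton's Python DSL and launched from PyTorch. The sketch below is illustrative only (it is not taken from TritonBench) and shows the shape of a typical task a model is asked to generate: a masked element-wise vector add with a 1-D launch grid.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Inputs are assumed to be contiguous CUDA tensors of equal shape.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                   # 1-D launch grid
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

A benchmark like TritonBench then pairs such a kernel with example inputs and checks correctness and performance against a reference implementation.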
Alternatives and similar repositories for TritonBench
Users interested in TritonBench are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM. ☆119 · Updated 3 months ago
- ☆64 · Updated 4 months ago
- ☆88 · Updated 10 months ago
- ☆50 · Updated 4 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆59 · Updated 3 weeks ago
- DeeperGEMM: crazy optimized version ☆70 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆82 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆265 · Updated 2 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- ☆150 · Updated last year
- ☆82 · Updated 7 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆320 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆223 · Updated this week
- ☆96 · Updated 4 months ago
- Tile-based language built for AI computation across all scales ☆57 · Updated last week
- A lightweight design for computation-communication overlap. ☆167 · Updated last week
- Estimate MFU for DeepSeekV3 ☆24 · Updated 8 months ago
- ☆142 · Updated 7 months ago
- 16-fold memory access reduction with nearly no loss ☆105 · Updated 5 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆82 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core. ☆89 · Updated last week
- Implement Flash Attention using Cute. ☆95 · Updated 9 months ago
- ☆55 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆133 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆168 · Updated last year
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆141 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆333 · Updated 2 months ago
- ☆111 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆92 · Updated last week
- ☆94 · Updated 5 months ago