pytorch-labs / tritonbench
Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance.
☆16, updated this week
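The description above says tritonbench pairs custom operators with example inputs and measures their performance. As a rough illustration of that kind of warmup-then-measure loop, here is a minimal sketch using only the Python standard library (tritonbench itself targets PyTorch/Triton operators on GPU; the function name `benchmark` and the stdlib timing here are illustrative assumptions, not tritonbench's API):

```python
# Hedged sketch of a tritonbench-style measurement loop, stdlib only.
# A plain Python function stands in for a PyTorch custom operator,
# and wall-clock timing stands in for the real harness.
import time
import statistics

def benchmark(fn, *args, warmup=10, rep=100):
    """Call fn(*args) `rep` times after `warmup` calls; return median latency in ms."""
    for _ in range(warmup):      # warmup: exclude one-time costs from timing
        fn(*args)
    times = []
    for _ in range(rep):
        t0 = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - t0) * 1e3)  # seconds -> ms
    return statistics.median(times)

# Example "operator" with example inputs: a naive dot product.
xs = list(range(1024))
ys = list(range(1024))
median_ms = benchmark(lambda a, b: sum(i * j for i, j in zip(a, b)), xs, ys)
print(f"median latency: {median_ms:.3f} ms")
```

Reporting the median over many repetitions, rather than a single run, is the usual way benchmarking harnesses suppress timer noise and scheduler jitter.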
Related projects
Alternatives and complementary repositories for tritonbench
- Extensible collectives library in Triton (☆65, updated last month)
- Boosting 4-bit inference kernels with 2:4 sparsity (☆51, updated 2 months ago)
- Simple and fast low-bit matmul kernels in CUDA / Triton (☆140, updated this week)
- Applied AI experiments and examples for PyTorch (☆160, updated last week)
- FlexAttention with FlashAttention-3 support (☆27, updated last month)
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline (☆87, updated 4 months ago)
- Triton-based implementation of Sparse Mixture of Experts (☆184, updated last month)
- A safetensors extension to efficiently store sparse quantized tensors on disk (☆46, updated this week)
- PyTorch bindings for CUTLASS grouped GEMM (☆53, updated last week)
- Framework to reduce autotune overhead to zero for well-known deployments (☆19, updated 3 weeks ago)
- Breaking the throughput-latency trade-off for long sequences with speculative decoding (☆70, updated last week)
- Unit Scaling demo and experimentation code (☆16, updated 8 months ago)
- Fast matrix multiplications for lookup-table-quantized LLMs (☆184, updated last month)
- Experiment using Tangent to autodiff Triton