IBM / triton-dejavu
Framework to reduce autotune overhead to zero for well-known deployments.
☆19 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for triton-dejavu
- Extensible collectives library in Triton ☆65 · Updated last month
- ☆43 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆16 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆51 · Updated 2 months ago
- ☆55 · Updated 5 months ago
- Simple and fast low-bit matmul kernels in CUDA / Triton ☆140 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆53 · Updated last week
- Applied AI experiments and examples for PyTorch ☆160 · Updated last week
- Collection of kernels written in the Triton language ☆63 · Updated 2 weeks ago
- ☆46 · Updated last month
- TensorRT LLM benchmark configuration ☆11 · Updated 3 months ago
- GPTQ inference TVM kernel ☆35 · Updated 6 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆59 · Updated 8 months ago
- ☆88 · Updated 2 months ago
- High-speed GEMV kernels, with up to 2.7× speedup over the PyTorch baseline ☆87 · Updated 4 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆46 · Updated this week
- ☆156 · Updated last year
- Cataloging released Triton kernels. ☆133 · Updated 2 months ago
- Repository for CPU kernel generation for LLM inference ☆24 · Updated last year
- ☆12 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆35 · Updated 6 months ago
- ☆11 · Updated last month
- ☆79 · Updated 2 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆96 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆55 · Updated 4 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆34 · Updated 2 years ago
- ☆48 · Updated 8 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆184 · Updated last month
- Python package for rocm-smi-lib ☆18 · Updated last month