Dao-AILab / gemm-cublas
☆22 · Updated 3 months ago
Alternatives and similar repositories for gemm-cublas
Users interested in gemm-cublas are comparing it to the libraries listed below.
- ☆123 · Updated 2 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆152 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆110 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆72 · Updated last year
- ☆50 · Updated 2 months ago
- A bunch of kernels that might make stuff slower 😉 ☆56 · Updated last week
- The evaluation framework for training-free sparse attention in LLMs ☆88 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆42 · Updated last month
- ☆232 · Updated 2 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 10 months ago
- Quantized Attention on GPU ☆44 · Updated 8 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆71 · Updated last year
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆78 · Updated last month
- ☆75 · Updated 2 months ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- ☆32 · Updated last year
- Ring-attention experiments ☆146 · Updated 9 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆79 · Updated 2 weeks ago
- Awesome Triton Resources ☆32 · Updated 3 months ago
- ☆79 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- ☆39 · Updated 4 months ago
- Transformers components, but in Triton ☆34 · Updated 2 months ago
- Benchmark tests supporting the TiledCUDA library ☆16 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆69 · Updated 5 months ago
- Triton implementation of FlashAttention2 that adds custom masks ☆128 · Updated 11 months ago
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 10 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆24 · Updated last month