moonquest-ai / SRDA
☆29 · Updated 3 months ago
Alternatives and similar repositories for SRDA
Users interested in SRDA are comparing it to the libraries listed below.
- ☆97 · Updated 4 months ago
- ☆107 · Updated last month
- ☆64 · Updated 5 months ago
- DeeperGEMM: crazy optimized version · ☆70 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. · ☆82 · Updated last week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit · ☆67 · Updated this week
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters · ☆50 · Updated last year
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend · ☆74 · Updated this week
- 🤖FFPA: Extends FlashAttention-2 with Split-D and ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. · ☆218 · Updated last month
- Quantized Attention on GPU · ☆44 · Updated 10 months ago
- ☆50 · Updated 4 months ago
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels · ☆150 · Updated last week
- Tile-based language built for AI computation across all scales · ☆59 · Updated last week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. · ☆97 · Updated 3 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance.⚡️ · ☆116 · Updated 4 months ago
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators · ☆81 · Updated 3 months ago
- Triton multi-level runner, including IR/PTX/cubin. · ☆54 · Updated this week
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" · ☆71 · Updated 3 months ago
- An experimental communicating attention kernel based on DeepEP. · ☆34 · Updated last month
- Triton for DSA · ☆41 · Updated last week
- ☆95 · Updated 6 months ago
- An LLM-based AI agent that can automatically write correct and efficient GPU kernels. · ☆30 · Updated last month
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … · ☆27 · Updated 9 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. · ☆43 · Updated 3 months ago
- ☆42 · Updated 4 months ago
- ☆19 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] · ☆37 · Updated 6 months ago
- ☆82 · Updated 8 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. · ☆115 · Updated last year
- ☆12 · Updated 8 months ago