FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs.
☆54 · Updated Feb 6, 2026
Alternatives and similar repositories for flashtile
Users who are interested in flashtile are comparing it to the libraries listed below.
- An experimental communicating attention kernel based on DeepEP. ☆35 · Updated Jul 29, 2025
- ☆32 · Updated Jul 2, 2025
- ☆87 · Updated this week
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated Oct 1, 2025
- A std::execution-style runtime context and high-performance RPC transport using OpenUCX, including CUDA/ROCm/... devices with RDMA. ☆29 · Updated Feb 22, 2026
- ☆52 · Updated May 19, 2025
- ☆88 · Updated May 31, 2025
- A Triton-only attention backend for vLLM ☆24 · Updated Feb 11, 2026
- High-performance RMSNorm implemented using SM core storage (registers and shared memory) ☆27 · Updated Jan 22, 2026
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated Feb 18, 2026
- DeeperGEMM: crazy optimized version ☆74 · Updated May 5, 2025
- ☆22 · Updated May 5, 2025
- Collection of Acceleration Methods for Generative AI ☆29 · Updated Dec 9, 2025
- Wave: Python Domain-Specific Language for High Performance Machine Learning ☆45 · Updated this week
- ☆44 · Updated this week
- Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) ☆45 · Updated Jan 6, 2026
- Development repository for the Triton-Linalg conversion ☆215 · Updated Feb 7, 2025
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated Jun 4, 2025
- A collection of Ethereum Virtual Machine benchmarks ☆22 · Updated Jun 7, 2024
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated Jun 28, 2025
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆60 · Updated Mar 25, 2025
- DeepSeek-V3/R1 inference performance simulator ☆179 · Updated Mar 27, 2025
- ☆111 · Updated this week
- Unofficial description of the CUDA assembly (SASS) instruction sets. ☆201 · Updated Jul 18, 2025
- Performance engineering ☆30 · Updated Jul 11, 2024
- Triton for DSA ☆57 · Updated Feb 12, 2026
- Tile-based language built for AI computation across all scales ☆138 · Updated this week
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆61 · Updated Feb 2, 2026
- Next-Toggle is a simple plug-and-use theme toggle button with multiple light and dark themes. ☆11 · Updated May 9, 2024
- Optimize GEMM with Tensor Cores step by step ☆36 · Updated Dec 17, 2023
- Fast and memory-efficient exact k-means ☆140 · Updated Feb 18, 2026
- FP8 flash attention implemented on the Ada architecture using the CUTLASS repository ☆79 · Updated Aug 12, 2024
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆904 · Updated this week
- ☆53 · Updated Feb 24, 2026
- An annotated nano_vllm repository, with an adaptation for MiniCPM4 and support for registering new models ☆166 · Updated Aug 11, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,371 · Updated Feb 13, 2026
- From Minimal GEMM to Everything ☆163 · Updated Feb 10, 2026
- Well-annotated word2vec source code with detailed bilingual comments ☆10 · Updated Oct 3, 2021
- ☆28 · Updated Dec 3, 2025