A bunch of kernels that might make stuff slower
☆88 · Apr 24, 2026 · Updated last week
Alternatives and similar repositories for accelerated-model-architectures
Users interested in accelerated-model-architectures are comparing it to the libraries listed below.
- Transformers components but in Triton (☆34 · May 9, 2025 · Updated 11 months ago)
- TileFusion is an experimental C++ macro kernel template library that raises the level of abstraction in CUDA C for tile processing. (☆108 · Jun 28, 2025 · Updated 10 months ago)
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… (see the linear-RNN sketch after this list) (☆21 · Mar 15, 2025 · Updated last year)
- Awesome Triton Resources (☆40 · Apr 27, 2025 · Updated last year)
- A Quirky Assortment of CuTe Kernels (☆955 · Updated this week)
- ☆57 · Feb 24, 2026 · Updated 2 months ago
- Variable-order CRFs with structure learning (☆17 · Aug 1, 2024 · Updated last year)
- ☆12 · Jan 29, 2021 · Updated 5 years ago
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 GPUs. (☆60 · Feb 6, 2026 · Updated 2 months ago)
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline (see the GEMV sketch after this list). (☆129 · Jul 13, 2024 · Updated last year)
- Automated analysis of trace files (☆74 · Updated this week)
- Official repository for Flash Local Linear Attention (☆23 · Apr 23, 2026 · Updated last week)
- ☆265 · Jul 11, 2024 · Updated last year
- ☆18 · Nov 11, 2025 · Updated 5 months ago
- PyTorch routines for (Ker)nel (Mac)hines (☆12 · Oct 10, 2025 · Updated 6 months ago)
- ☆16 · May 14, 2024 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21 · Jul 29, 2024 · Updated last year)
- Tritonbench is a collection of PyTorch custom operators with example inputs for measuring their performance (see the benchmarking sketch after this list). (☆351 · Updated this week)
- Framework that reduces autotuning overhead to zero for well-known deployments (see the config-cache sketch after this list). (☆99 · Sep 19, 2025 · Updated 7 months ago)
- High-performance LLM operator library built on TileLang (☆111 · Updated this week)
- ☆28 · Jan 17, 2025 · Updated last year
- Fast low-bit matmul kernels in Triton (☆446 · Apr 27, 2026 · Updated last week)
- FlashRNN: Fast RNN Kernels with I/O Awareness (☆181 · Oct 20, 2025 · Updated 6 months ago)
- DeeperGEMM: a heavily optimized version (☆86 · May 5, 2025 · Updated 11 months ago)
- Official project page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) (☆48 · Jan 6, 2026 · Updated 3 months ago)
- Automatic differentiation for Triton kernels (☆29 · Aug 12, 2025 · Updated 8 months ago)
- Official repository for Efficient Linear-Time Attention Transformers (☆18 · Jun 2, 2024 · Updated last year)
- LM engine is a library for pretraining/finetuning LLMs (☆165 · Updated this week)
- Cataloging released Triton kernels (☆302 · Sep 9, 2025 · Updated 7 months ago)
- Distributed compiler based on Triton for parallel systems (☆1,420 · Apr 22, 2026 · Updated last week)
- A SystemVerilog implementation of a MIPS32 CPU and RIP router (☆22 · Jan 12, 2020 · Updated 6 years ago)
- An experimental communicating attention kernel based on DeepEP (☆34 · Jul 29, 2025 · Updated 9 months ago)
- ☆111 · Mar 12, 2026 · Updated last month
- FlexAttention with FlashAttention-3 support (☆27 · Oct 5, 2024 · Updated last year)
- TritonParse: a compiler tracer, visualizer, and reproducer for Triton kernels (☆203 · Apr 28, 2026 · Updated last week)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18 · Mar 15, 2024 · Updated 2 years ago)
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… (☆57 · Mar 31, 2026 · Updated last month)
- GEMV implementation with CUTLASS (☆21 · Aug 21, 2025 · Updated 8 months ago)
- An MLIR-based AI compiler with a Python frontend targeting RISC-V DSAs (☆14 · Oct 10, 2024 · Updated last year)
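
A few of the technique-oriented entries above are easier to parse with a sketch. For the linear-RNN eigenvalue entry: a minimal sketch of why the eigenvalue range matters, assuming a diagonal linear recurrence; the parameterizations here are illustrative, not the paper's code.

```python
import torch

# Diagonal linear RNN: h_t = a * h_{t-1} + b * x_t (elementwise).
# A sigmoid gate keeps the transition eigenvalues a in (0, 1), so the
# state can only decay toward zero. Remapping to (-1, 1) lets the state
# flip sign from step to step, which parity-style state-tracking tasks need.
def scan(a, b, x):
    # a, b: (D,); x: (T, D); sequential reference implementation
    h = torch.zeros(x.shape[-1])
    out = []
    for x_t in x:
        h = a * h + b * x_t
        out.append(h)
    return torch.stack(out)

torch.manual_seed(0)
raw = torch.randn(8)
a_pos = torch.sigmoid(raw)          # eigenvalues in (0, 1): decay only
a_neg = 2 * torch.sigmoid(raw) - 1  # eigenvalues in (-1, 1): can oscillate
x = torch.randn(16, 8)
b = torch.ones(8)
print(scan(a_pos, b, x)[-1])
print(scan(a_neg, b, x)[-1])
```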
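
For the GEMV entries: a minimal Triton sketch of a row-per-program GEMV, not the listed repositories' kernels; the block size and launch shape are illustrative.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def gemv_kernel(A, X, Y, M, N, stride_am, stride_an, BLOCK_N: tl.constexpr):
    # One program computes one output row: y[m] = sum_n A[m, n] * x[n].
    m = tl.program_id(0)
    offs = tl.arange(0, BLOCK_N)
    acc = tl.zeros((BLOCK_N,), dtype=tl.float32)
    for n0 in range(0, N, BLOCK_N):  # walk the row one block at a time
        n = n0 + offs
        mask = n < N
        a = tl.load(A + m * stride_am + n * stride_an, mask=mask, other=0.0)
        x = tl.load(X + n, mask=mask, other=0.0)
        acc += a.to(tl.float32) * x.to(tl.float32)
    tl.store(Y + m, tl.sum(acc, axis=0))

def gemv(A: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    M, N = A.shape
    y = torch.empty(M, device=A.device, dtype=torch.float32)
    gemv_kernel[(M,)](A, x, y, M, N, A.stride(0), A.stride(1), BLOCK_N=1024)
    return y

A = torch.randn(512, 2048, device="cuda")
x = torch.randn(2048, device="cuda")
torch.testing.assert_close(gemv(A, x), A @ x, rtol=1e-3, atol=1e-3)
```

One program per row is the simplest launch shape; production GEMV kernels typically add split-K style reductions to keep more SMs busy when the row count is small.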
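
For the Tritonbench entry: a minimal sketch of the measure-against-baseline pattern using `triton.testing.do_bench`, reusing the `gemv` wrapper from the previous sketch; this is illustrative, not Tritonbench's harness.

```python
import torch
import triton.testing

# Fixed example input, as in an operator benchmark suite.
A = torch.randn(512, 2048, device="cuda")
x = torch.randn(2048, device="cuda")

# do_bench warms up, runs the closure repeatedly, and returns a latency in ms.
baseline_ms = triton.testing.do_bench(lambda: A @ x)
custom_ms = triton.testing.do_bench(lambda: gemv(A, x))
print(f"baseline {baseline_ms:.3f} ms, custom {custom_ms:.3f} ms, "
      f"speedup {baseline_ms / custom_ms:.2f}x")
```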
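
For the zero-overhead autotuning entry: one common way to make autotune overhead vanish for known deployments is to benchmark once per input shape, persist the winner, and dispatch by lookup afterwards. A minimal sketch of that pattern; the cache file and `best_config` helper are hypothetical, not the listed framework's API.

```python
import json
from pathlib import Path

CACHE = Path("autotune_cache.json")  # hypothetical persisted cache

def best_config(shape, candidates, bench):
    """Return the fastest config for `shape`, benchmarking only on a cache miss."""
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    key = "x".join(map(str, shape))
    if key not in cache:  # cold path: time every candidate once
        timings = {json.dumps(c): bench(c) for c in candidates}
        cache[key] = json.loads(min(timings, key=timings.get))
        CACHE.write_text(json.dumps(cache))
    return cache[key]     # warm path: a dict lookup, zero benchmarking

# Stand-in bench function; in practice this would time the real kernel.
cfg = best_config((512, 2048),
                  [{"BLOCK_N": 256}, {"BLOCK_N": 1024}],
                  bench=lambda c: float(c["BLOCK_N"]))
print(cfg)
```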