A bunch of kernels that might make stuff slower
★87 · Apr 10, 2026 · Updated this week
Alternatives and similar repositories for accelerated-model-architectures
Users interested in accelerated-model-architectures are comparing it to the libraries listed below.
- Transformers components but in Triton ★34 · May 9, 2025 · Updated 11 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ★106 · Jun 28, 2025 · Updated 9 months ago
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ★21 · Mar 15, 2025 · Updated last year
- Awesome Triton Resources ★39 · Apr 27, 2025 · Updated 11 months ago
- A Quirky Assortment of CuTe Kernels ★924 · Updated this week
- ★57 · Feb 24, 2026 · Updated last month
- Variable-order CRFs with structure learning ★17 · Aug 1, 2024 · Updated last year
- ★12 · Jan 29, 2021 · Updated 5 years ago
- Automating analysis from trace files ★66 · Updated this week
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ★59 · Feb 6, 2026 · Updated 2 months ago
- High-speed GEMV kernels, achieving up to a 2.7x speedup over the PyTorch baseline. ★128 · Jul 13, 2024 · Updated last year
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ★23 · Oct 1, 2025 · Updated 6 months ago
- ★261 · Jul 11, 2024 · Updated last year
- ★18 · Nov 11, 2025 · Updated 5 months ago
- PyTorch routines for (Ker)nel (Mac)hines ★11 · Oct 10, 2025 · Updated 6 months ago
- ★16 · May 14, 2024 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ★21 · Jul 29, 2024 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ★343 · Updated this week
- High-performance LLM operator library built on TileLang. ★98 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ★98 · Sep 19, 2025 · Updated 6 months ago
- ★28 · Jan 17, 2025 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ★179 · Oct 20, 2025 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ★443 · Apr 4, 2026 · Updated last week
- DeeperGEMM: a crazily optimized version ★86 · May 5, 2025 · Updated 11 months ago
- Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) ★47 · Jan 6, 2026 · Updated 3 months ago
- Automatic differentiation for Triton Kernels ★29 · Aug 12, 2025 · Updated 8 months ago
- Official Repository for Efficient Linear-Time Attention Transformers. ★18 · Jun 2, 2024 · Updated last year
- LM engine is a library for pretraining and finetuning LLMs ★163 · Apr 8, 2026 · Updated last week
- Cataloging released Triton kernels. ★301 · Sep 9, 2025 · Updated 7 months ago
- Distributed compiler based on Triton for parallel systems ★1,403 · Updated this week
- A SystemVerilog implementation of a MIPS32 CPU and RIP router ★22 · Jan 12, 2020 · Updated 6 years ago
- An experimental communicating attention kernel based on DeepEP. ★35 · Jul 29, 2025 · Updated 8 months ago
- ★109 · Mar 12, 2026 · Updated last month
- FlexAttention w/ FlashAttention3 Support ★27 · Oct 5, 2024 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ★198 · Apr 8, 2026 · Updated last week
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ★18 · Mar 15, 2024 · Updated 2 years ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ★56 · Mar 31, 2026 · Updated 2 weeks ago
- GEMV implementation with CUTLASS ★19 · Aug 21, 2025 · Updated 7 months ago
- An MLIR-based AI compiler designed for a Python frontend to RISC-V DSA ★13 · Oct 10, 2024 · Updated last year