A bunch of kernels that might make stuff slower
★79 · Mar 19, 2026 · Updated this week
Alternatives and similar repositories for accelerated-model-architectures
Users interested in accelerated-model-architectures are comparing it to the libraries listed below.
- Transformers components but in Triton · ★34 · May 9, 2025 · Updated 10 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. · ★106 · Jun 28, 2025 · Updated 8 months ago
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… · ★21 · Mar 15, 2025 · Updated last year
- Awesome Triton Resources · ★39 · Apr 27, 2025 · Updated 10 months ago
- A Quirky Assortment of CuTe Kernels · ★863 · Updated this week
- ★53 · Feb 24, 2026 · Updated last month
- Variable-order CRFs with structure learning · ★17 · Aug 1, 2024 · Updated last year
- ★12 · Jan 29, 2021 · Updated 5 years ago
- Automating analysis from trace files · ★64 · Updated this week
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. · ★58 · Feb 6, 2026 · Updated last month
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline. · ★128 · Jul 13, 2024 · Updated last year
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regressi… · ★23 · Oct 1, 2025 · Updated 5 months ago
- ★261 · Jul 11, 2024 · Updated last year
- ★18 · Nov 11, 2025 · Updated 4 months ago
- PyTorch routines for (Ker)nel (Mac)hines · ★11 · Oct 10, 2025 · Updated 5 months ago
- High-performance LLM operator library built on TileLang. · ★93 · Updated this week
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf · ★21 · Jul 29, 2024 · Updated last year
- ★16 · May 14, 2024 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ★332 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. · ★97 · Sep 19, 2025 · Updated 6 months ago
- ★28 · Jan 17, 2025 · Updated last year
- DeeperGEMM: crazy optimized version · ★75 · May 5, 2025 · Updated 10 months ago
- Fast low-bit matmul kernels in Triton · ★438 · Feb 1, 2026 · Updated last month
- Official project page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) · ★45 · Jan 6, 2026 · Updated 2 months ago
- Automatic differentiation for Triton kernels · ★29 · Aug 12, 2025 · Updated 7 months ago
- Official repository for Efficient Linear-Time Attention Transformers. · ★18 · Jun 2, 2024 · Updated last year
- LM engine is a library for pretraining and finetuning LLMs. · ★136 · Mar 18, 2026 · Updated last week
- Cataloging released Triton kernels. · ★298 · Sep 9, 2025 · Updated 6 months ago
- Distributed compiler based on Triton for parallel systems · ★1,394 · Mar 11, 2026 · Updated 2 weeks ago
- An experimental communicating attention kernel based on DeepEP. · ★35 · Jul 29, 2025 · Updated 7 months ago
- ★105 · Mar 12, 2026 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels · ★197 · Updated this week
- FlexAttention with FlashAttention3 support · ★27 · Oct 5, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ★18 · Mar 15, 2024 · Updated 2 years ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… · ★56 · Updated this week
- GEMV implementation with CUTLASS · ★19 · Aug 21, 2025 · Updated 7 months ago
- ★11 · Oct 11, 2023 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … · ★11 · Mar 18, 2023 · Updated 3 years ago
- ★20 · May 24, 2025 · Updated 10 months ago