NX-AI / flashrnn
FlashRNN - Fast RNN Kernels with I/O Awareness
☆76 · Updated last week
Alternatives and similar repositories for flashrnn:
Users who are interested in flashrnn are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆50 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆180 · Updated 7 months ago
- Experiment of using Tangent to autodiff Triton ☆78 · Updated last year
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆143 · Updated 2 weeks ago
- Load compute kernels from the Hub ☆107 · Updated this week
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated last month
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 5 months ago
- ☆76 · Updated 8 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- A library for unit scaling in PyTorch ☆125 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- PyTorch implementation of the PEER block from the paper Mixture of A Million Experts, by Xu Owen He at DeepMind ☆122 · Updated 7 months ago
- JAX bindings for Flash Attention v2 ☆89 · Updated 8 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆80 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆107 · Updated this week
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 2 weeks ago
- Fast and memory-efficient exact attention ☆67 · Updated 3 weeks ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 11 months ago
- ☆98 · Updated 10 months ago
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch ☆103 · Updated 4 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 3 months ago
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 9 months ago
- Ring-attention experiments ☆128 · Updated 5 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 8 months ago
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 2 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆96 · Updated 7 months ago
- Normalized Transformer (nGPT) ☆164 · Updated 4 months ago
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆80 · Updated last month