NX-AI / flashrnn
FlashRNN - Fast RNN Kernels with I/O Awareness
☆173 · Updated 2 months ago
Alternatives and similar repositories for flashrnn
Users interested in flashrnn are comparing it to the libraries listed below:
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆79 · Updated last month
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆223 · Updated 6 months ago
- Accelerated First Order Parallel Associative Scan ☆193 · Updated this week
- ☆156 · Updated last month
- ☆260 · Updated 6 months ago
- Normalized Transformer (nGPT) ☆194 · Updated last year
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆141 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆294 · Updated 6 months ago
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆94 · Updated 6 months ago
- RWKV-7: Surpassing GPT ☆102 · Updated last year
- Load compute kernels from the Hub ☆352 · Updated last week
- Supporting code for the blog post on modular manifolds. ☆107 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… ☆46 · Updated 3 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆404 · Updated 3 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆134 · Updated 2 months ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆133 · Updated last month
- ☆69 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- JAX bindings for Flash Attention v2 ☆101 · Updated this week
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 7 months ago
- 📄 Small Batch Size Training for Language Models ☆69 · Updated 2 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Work in progress. ☆75 · Updated last month
- Fast modular code to create and train cutting edge LLMs ☆68 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆337 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- Flash Attention Triton kernel with support for second-order derivatives ☆125 · Updated last week
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year