NX-AI / flashrnn
FlashRNN - Fast RNN Kernels with I/O Awareness
☆94 · Updated 2 months ago
Alternatives and similar repositories for flashrnn
Users who are interested in flashrnn are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆69 · Updated last week
- Normalized Transformer (nGPT) ☆186 · Updated 9 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆160 · Updated 2 months ago
- ☆237 · Updated 2 months ago
- Accelerated First Order Parallel Associative Scan ☆187 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆129 · Updated 8 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆199 · Updated 5 months ago
- Load compute kernels from the Hub ☆244 · Updated this week
- Work in progress. ☆72 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆90 · Updated 2 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- ring-attention experiments ☆149 · Updated 10 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆131 · Updated 2 weeks ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆43 · Updated this week
- Official implementation for "Training LLMs with MXFP4" ☆77 · Updated 4 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA AI ☆289 · Updated 2 months ago
- JAX bindings for Flash Attention v2 ☆90 · Updated last week
- DPO, but faster 🚀 ☆44 · Updated 8 months ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆60 · Updated 6 months ago
- ☆123 · Updated 2 months ago
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- Fast and memory-efficient exact attention ☆70 · Updated 5 months ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆46 · Updated last month
- ☆40 · Updated 4 months ago
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated last week
- ☆34 · Updated last year