NX-AI / mlstm_kernels
Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels.
☆54 · Updated 2 weeks ago
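For context, the mLSTM that these kernels accelerate uses a matrix (covariance-style) memory with exponential gating. Below is a minimal recurrent PyTorch sketch of that update rule, following the formulation in the xLSTM paper; it is illustrative only and is not the library's API — mlstm_kernels implements the same math as tiled/chunked Triton and CUDA kernels rather than a Python loop, and all names here are made up for the example.

```python
# Minimal single-head recurrent mLSTM step (matrix memory with exponential
# gating and log-space stabilization). Sketch for illustration only; the
# library's actual kernels are tiled/chunked and run on GPU.
import torch
import torch.nn.functional as F

def mlstm_recurrent(q, k, v, i_gate, f_gate):
    """q, k, v: (B, S, D); i_gate, f_gate: (B, S) pre-activation gate logits."""
    B, S, D = q.shape
    C = q.new_zeros(B, D, D)              # matrix memory C_t
    n = q.new_zeros(B, D)                 # normalizer state n_t
    m = q.new_full((B,), float("-inf"))   # running log-space stabilizer
    outputs = []
    for t in range(S):
        # exponential gating, stabilized in log space
        log_f = F.logsigmoid(f_gate[:, t])
        m_new = torch.maximum(log_f + m, i_gate[:, t])
        f_t = torch.exp(log_f + m - m_new).unsqueeze(-1)   # (B, 1)
        i_t = torch.exp(i_gate[:, t] - m_new).unsqueeze(-1)
        # memory update: C_t = f_t * C_{t-1} + i_t * v_t k_t^T
        C = f_t.unsqueeze(-1) * C + i_t.unsqueeze(-1) * torch.einsum(
            "bd,be->bde", v[:, t], k[:, t]
        )
        n = f_t * n + i_t * k[:, t]
        # read-out: h_t = (C_t q_t) / max(|n_t^T q_t|, 1)
        num = torch.einsum("bde,be->bd", C, q[:, t])
        denom = torch.einsum("bd,bd->b", n, q[:, t]).abs().clamp(min=1.0)
        outputs.append(num / denom.unsqueeze(-1))
        m = m_new
    return torch.stack(outputs, dim=1)   # (B, S, D)
```

The "Tiled Flash Linear Attention" approach replaces this sequential loop with chunkwise-parallel kernels that keep the chunk-local state in on-chip memory, in the spirit of FlashAttention's tiling.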
Alternatives and similar repositories for mlstm_kernels:
Users interested in mlstm_kernels are comparing it to the libraries listed below.
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆82 · Updated 3 weeks ago
- DPO, but faster 🚀 ☆40 · Updated 4 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 10 months ago
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆82 · Updated 2 months ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆123 · Updated 8 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆47 · Updated 6 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆103 · Updated 4 months ago
- ☆78 · Updated 8 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated 2 months ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- Research implementation of Native Sparse Attention (2502.11089) ☆53 · Updated 2 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆156 · Updated last month
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆48 · Updated last month
- Normalized Transformer (nGPT) ☆168 · Updated 5 months ago
- Accelerated First Order Parallel Associative Scan ☆181 · Updated 8 months ago
- ☆59 · Updated 5 months ago
- ☆79 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆27 · Updated last year
- ☆53 · Updated last month
- ☆52 · Updated 6 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 11 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated last year
- A State-Space Model with Rational Transfer Function Representation. ☆78 · Updated 11 months ago
- Custom Triton kernels for training Karpathy's nanoGPT. ☆18 · Updated 6 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆94 · Updated 3 months ago