NX-AI / mlstm_kernels
Tiled Flash Linear Attention library for fast and efficient mLSTM kernels.
☆56 · Updated 2 weeks ago
Alternatives and similar repositories for mlstm_kernels
Users interested in mlstm_kernels are comparing it to the libraries listed below.
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆90 · Updated 2 months ago
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆83 · Updated 3 months ago
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆88 · Updated 11 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆52 · Updated 2 months ago
- DPO, but faster 🚀 ☆42 · Updated 5 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Implementation of Adam-atan2, proposed by Google DeepMind, in PyTorch ☆106 · Updated 6 months ago
- ☆31 · Updated last month
- Implementation of a Light Recurrent Unit in PyTorch ☆47 · Updated 7 months ago
- ☆80 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆78 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 5 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆54 · Updated 3 months ago
- ☆56 · Updated 2 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆124 · Updated 2 weeks ago
- Accelerated First Order Parallel Associative Scan ☆181 · Updated 9 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 10 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆66 · Updated last month
- ☆61 · Updated 6 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆115 · Updated last week
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆167 · Updated 2 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers ☆63 · Updated last year
- Work in progress ☆67 · Updated this week
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆124 · Updated 9 months ago
- ☆53 · Updated 8 months ago
- Combining SOAP and Muon ☆16 · Updated 3 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated 7 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆81 · Updated last year
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to de… ☆35 · Updated last week