tobiaskatsch / GatedLinearRNN
☆29 · Updated last year
Alternatives and similar repositories for GatedLinearRNN
Users interested in GatedLinearRNN are comparing it to the libraries listed below.
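For context before the list: GatedLinearRNN (GateLoop) and most of the repositories below are built around a data-controlled gated linear recurrence of the form h_t = a_t ⊙ h_{t-1} + b_t ⊙ x_t. The sketch below is a minimal sequential illustration of that recurrence, not the repository's actual API; the function name, gate parameterization, and shapes are assumptions for illustration (real implementations use parallel/associative-scan forms for speed).

```python
import torch

def gated_linear_rnn(x, a, b):
    """Minimal gated linear recurrence: h_t = a_t * h_{t-1} + b_t * x_t.

    x, a, b: (batch, seq_len, dim) tensors, where a_t and b_t are
    input-dependent gates (e.g. produced by projections of x).
    Returns the stacked hidden states, shape (batch, seq_len, dim).
    """
    h = torch.zeros_like(x[:, 0])
    out = []
    for t in range(x.shape[1]):
        h = a[:, t] * h + b[:, t] * x[:, t]  # elementwise gated update
        out.append(h)
    return torch.stack(out, dim=1)

# Usage: gates typically come from sigmoid-squashed projections of the input.
x = torch.randn(2, 16, 8)
a = torch.sigmoid(torch.randn(2, 16, 8))  # forget/decay gate in (0, 1)
b = torch.sigmoid(torch.randn(2, 16, 8))  # input gate
h = gated_linear_rnn(x, a, b)
print(h.shape)  # torch.Size([2, 16, 8])
```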
- Implementation of Spectral State Space Models ☆16 · Updated last year
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆29 · Updated 5 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- ☆34 · Updated last year
- ☆35 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- Implementation of GateLoop Transformer in PyTorch and Jax ☆91 · Updated last year
- ☆19 · Updated last month
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆63 · Updated 11 months ago
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆25 · Updated last year
- ☆32 · Updated 2 years ago
- ☆27 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- ☆53 · Updated 2 years ago
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 7 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 7 months ago
- ☆62 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆82 · Updated last month
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆14 · Updated 10 months ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- ☆82 · Updated last year