tobiaskatsch / GatedLinearRNN
☆27 · Updated last year
Alternatives and similar repositories for GatedLinearRNN:
Users interested in GatedLinearRNN are comparing it to the libraries listed below.
- Implementation of Spectral State Space Models ☆16 · Updated last year
- ☆52 · Updated 6 months ago
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆28 · Updated 4 years ago
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆87 · Updated 9 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- RWKV model implementation ☆37 · Updated last year
- sigma-MoE layer ☆18 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 8 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆53 · Updated last month
- Combining SOAP and MUON ☆14 · Updated 2 months ago
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 11 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- ☆19 · Updated 3 weeks ago
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- ☆32 · Updated last year
- ☆43 · Updated last year
- ☆51 · Updated 10 months ago
- Using FlexAttention to compute attention with different masking patterns ☆43 · Updated 6 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- ☆31 · Updated last year
- ☆53 · Updated last year
- ☆33 · Updated 7 months ago
- ☆39 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 4 months ago
- ☆27 · Updated 9 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆18 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆109 · Updated 4 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year