tobiaskatsch / GatedLinearRNN
☆27 · Updated 11 months ago
Alternatives and similar repositories for GatedLinearRNN:
Users interested in GatedLinearRNN are comparing it to the repositories listed below
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 8 months ago
- RWKV model implementation ☆37 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 9 months ago
- Implementation of Spectral State Space Models ☆16 · Updated 11 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 months ago
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆27 · Updated 4 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆103 · Updated 2 months ago
- sigma-MoE layer ☆18 · Updated last year
- GoldFinch and other hybrid transformer components ☆43 · Updated 7 months ago
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆11 · Updated 2 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 2 weeks ago
- Experiment of using Tangent to autodiff Triton ☆75 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆76 · Updated 2 months ago
- The official code of "Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers" ☆17 · Updated 6 months ago
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 4 months ago