tobiaskatsch / GatedLinearRNN
☆28 · Updated last year
Alternatives and similar repositories for GatedLinearRNN
Users interested in GatedLinearRNN are comparing it to the libraries listed below.
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 3 months ago
- Implementation of Spectral State Space Models ☆16 · Updated last year
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" ☆29 · Updated 5 years ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- ☆34 · Updated last year
- ☆57 · Updated 11 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Latent Diffusion Language Models ☆69 · Updated 2 years ago
- ☆19 · Updated 4 months ago
- ☆32 · Updated last year
- ☆34 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆25 · Updated 8 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆53 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- sigma-MoE layer ☆20 · Updated last year
- Combining SOAP and MUON ☆16 · Updated 7 months ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆72 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 9 months ago
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆94 · Updated 9 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- ☆53 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Updated last month
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆59 · Updated 3 years ago
- ☆102 · Updated last month
- RWKV-7: Surpassing GPT ☆95 · Updated 10 months ago