lucidrains / taylor-series-linear-attention
Explorations into the recently proposed Taylor Series Linear Attention
☆91 · Updated 4 months ago
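For context, a minimal sketch of the idea being explored, assuming the usual second-order Taylor expansion of the softmax kernel, exp(q·k) ≈ 1 + q·k + (q·k)²/2, used as a linear-attention feature map. This is an illustration of the technique (non-causal case, in Pytorch), not this repository's actual implementation:

```python
import torch
from torch import einsum

def taylor_feature_map(x):
    # phi(x) = [1, x, (x ⊗ x) / sqrt(2)], so that
    # phi(q) · phi(k) = 1 + q·k + (q·k)² / 2,
    # the 2nd-order Taylor expansion of exp(q·k)
    ones = torch.ones(*x.shape[:-1], 1, dtype=x.dtype, device=x.device)
    second = einsum('... i, ... j -> ... i j', x, x).flatten(-2) * (2 ** -0.5)
    return torch.cat((ones, x, second), dim=-1)

def taylor_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, dim) -- non-causal for brevity
    q = q * (q.shape[-1] ** -0.5)  # usual softmax scaling, folded into q
    q, k = taylor_feature_map(q), taylor_feature_map(k)
    # associativity trick: phi(q) @ (phi(k)ᵀ @ v) is linear in sequence length
    kv = einsum('b h n d, b h n e -> b h d e', k, v)
    num = einsum('b h n d, b h d e -> b h n e', q, kv)
    den = einsum('b h n d, b h d -> b h n', q, k.sum(dim=-2))
    return num / (den[..., None] + eps)

# the d² term in the feature map means the head dimension is kept small
q = k = v = torch.randn(1, 8, 1024, 16)
out = taylor_linear_attention(q, k, v)  # (1, 8, 1024, 16)
```

One appealing property: the kernel 1 + t + t²/2 = ((t + 1)² + 1)/2 is always at least 1/2, so the normalizer never vanishes; the cost is a feature dimension that grows as d², which is why such implementations typically keep the per-head dimension small.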
Alternatives and similar repositories for taylor-series-linear-attention:
Users interested in taylor-series-linear-attention are comparing it to the libraries listed below.
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆112 · Updated 3 months ago
- Implementation of Infini-Transformer in Pytorch ☆107 · Updated 2 weeks ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 6 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆53 · Updated 8 months ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆115 · Updated 4 months ago
- ☆33 · Updated 4 months ago
- Implementation of Agent Attention in Pytorch ☆89 · Updated 6 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆98 · Updated last month
- ☆53 · Updated 11 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆44 · Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ☆75 · Updated last month
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated last year
- Implementation of a multimodal diffusion transformer in Pytorch ☆99 · Updated 6 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆95 · Updated 3 weeks ago
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆119 · Updated 5 months ago
- ☆75 · Updated 6 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆85 · Updated 2 months ago
- ☆146 · Updated last month
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆58 · Updated 3 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆51 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆61 · Updated last week
- ☆37 · Updated 9 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 8 months ago