lucidrains / coordinate-descent-attention
Implementation of an attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k tokens
☆ 46 · Updated last year
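The tagline above is terse, so here is an illustrative sketch of the general idea: a differentiable top-k relaxation computed by alternating dual updates, which can then gate attention weights. The iteration, initialization, and hyperparameters below are assumptions made for this sketch, not necessarily the repo's exact routine.

```python
import numpy as np

def coordinate_descent_topk(s, k, n_iters=100, eps=0.1):
    # Entropy-regularized top-k relaxation (illustrative sketch only):
    # alternately update two dual variables so the resulting weights lie
    # in (0, 1] and roughly k of them saturate near 1.
    a = 0.0
    b = np.zeros_like(s)
    for _ in range(n_iters):
        sb = (s + b) / eps
        m = sb.max()
        lse = m + np.log(np.exp(sb - m).sum())  # stable logsumexp
        a = eps * (np.log(k) - lse)             # push the weights to sum to ~k
        b = -np.maximum(s + a, 0.0)             # clamp each weight to <= 1
    return np.exp((s + a + b) / eps)

scores = np.array([3.0, 1.0, 0.5, -2.0])
w = coordinate_descent_topk(scores, k=2)
# the two highest-scoring positions get weight near 1, the rest near 0
```

In an attention layer, weights like `w` could mask or reweight the attention logits so each head effectively attends to roughly k tokens while staying differentiable.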
Related projects:
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" (☆ 56, updated 10 months ago)
- CUDA implementation of autoregressive linear attention, with all the latest research findings (☆ 43, updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention (☆ 85, updated last month)
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network (☆ 45, updated last month)
- Implementation of Tranception, an attention network paired with retrieval that is SOTA for protein fitness prediction (☆ 31, updated 2 years ago)
- Some personal experiments around routing tokens to different autoregressive attention branches, akin to mixture-of-experts (☆ 101, updated last year)
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto (☆ 53, updated 4 months ago)
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ (☆ 52, updated last year)
- Implementation of Infini-Transformer in Pytorch (☆ 100, updated last month)
- Implementation of Agent Attention in Pytorch (☆ 83, updated 2 months ago)
- Implementation of a holodeck, written in Pytorch (☆ 17, updated 10 months ago)
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) (☆ 40, updated 7 months ago)
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… (☆ 50, updated 10 months ago)
- Experimental scripts for researching data-adaptive learning rate scheduling (☆ 23, updated 11 months ago)
- Experiment with diffusion models that you can run on your local Jupyter instance (☆ 52, updated last month)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (☆ 47, updated 2 years ago)
- Self-contained Pytorch implementation of a Sinkhorn-based router, for mixture-of-experts or otherwise (☆ 31, updated 3 weeks ago)
- Source-to-Source Debuggable Derivatives in Pure Python (☆ 14, updated 7 months ago)
- Contrastive Language-Image Pretraining (☆ 37, updated 2 months ago)
- Implementation of Discrete Key / Value Bottleneck, in Pytorch (☆ 87, updated last year)
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 (☆ 87, updated last year)
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… (☆ 51, updated last year)
- A Transformer made of Rotation-equivariant Attention using Vector Neurons (☆ 80, updated last year)
- Implementation of GateLoop Transformer in Pytorch and Jax (☆ 86, updated 3 months ago)
- A scalable implementation of diffusion and flow-matching with XGBoost models, applied to calorimeter data (☆ 13, updated 2 weeks ago)
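The Sinkhorn-based router entry above rests on a well-known primitive: Sinkhorn normalization, which turns a score matrix into a balanced soft assignment by alternately normalizing rows and columns. A minimal NumPy sketch follows; it assumes a square matrix with uniform marginals, whereas a real mixture-of-experts router would use a token-by-expert score matrix with capacity-scaled column marginals.

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    # Log-domain Sinkhorn normalization (illustrative sketch only):
    # alternately normalize rows and columns so exp(log_p) approaches
    # a doubly-stochastic matrix, i.e. a balanced soft assignment.
    def logsumexp(x, axis):
        m = x.max(axis=axis, keepdims=True)
        return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

    log_p = logits.astype(float)
    for _ in range(n_iters):
        log_p = log_p - logsumexp(log_p, axis=1)  # rows sum to 1
        log_p = log_p - logsumexp(log_p, axis=0)  # columns sum to 1
    return np.exp(log_p)

rng = np.random.default_rng(0)
p = sinkhorn(rng.normal(size=(4, 4)))
```

Working in log space avoids the under/overflow that plagues the naive multiply-and-renormalize form of the iteration; the column balance is what prevents all tokens from collapsing onto one expert.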