lucidrains / coordinate-descent-attention
Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k tokens
☆46 · Updated last year
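For context, here is a minimal sketch of the core idea: a differentiable soft top-k computed by coordinate descent on an entropy-regularized selection problem, in the style of the routine used by CoLT5, which this repo appears to build on. All names and defaults below are illustrative, not the repo's actual API.

```python
import math
import torch
import torch.nn.functional as F

def coor_descent_topk(scores, k, n_iters=20, eps=0.1):
    """Soft top-k via coordinate descent: alternately update two dual
    variables in closed form. Returns weights in [0, 1] that sum to
    roughly k per row (unlike softmax, which sums to 1)."""
    constant = eps * math.log(k)
    b = -scores
    for _ in range(n_iters):
        a = constant - eps * torch.logsumexp((scores + b) / eps, dim=-1, keepdim=True)
        b = -F.relu(scores + a)
    return ((scores + a + b) / eps).exp()

# toy usage: each query in each head attends to (roughly) its top-k keys
q, k_, v = (torch.randn(2, 8, 128, 64) for _ in range(3))  # (batch, heads, seq, dim)
sim = q @ k_.transpose(-1, -2) / math.sqrt(q.shape[-1])
attn = coor_descent_topk(sim, k=16)  # sparse-ish attention weights
out = attn @ v
```

Smaller `eps` pushes the weights toward a hard top-k mask; more iterations tighten the constraint that each row selects about k tokens.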
Alternatives and similar repositories for coordinate-descent-attention:
Users interested in coordinate-descent-attention are comparing it to the repositories listed below
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆49 · Updated 7 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆45 · Updated last month
- Explorations into the recently proposed Taylor Series Linear Attention ☆94 · Updated 7 months ago
- Experimental scripts for researching data adaptive learning rate scheduling ☆23 · Updated last year
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆31 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 9 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆55 · Updated 10 months ago
- Implementation of a holodeck, written in Pytorch ☆17 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆23 · Updated last month
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆86 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆87 · Updated 4 months ago
- A Transformer made of Rotation-equivariant Attention using Vector Neurons ☆87 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆99 · Updated 2 years ago
- Implementation of Infini-Transformer in Pytorch ☆109 · Updated 2 months ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆87 · Updated last year
- Implementation of a Light Recurrent Unit in Pytorch ☆47 · Updated 5 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆116 · Updated 5 months ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 2 years ago
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆101 · Updated 3 months ago
- Self-contained Pytorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise ☆32 · Updated 6 months ago
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… ☆52 · Updated last year
- ☆44 · Updated 10 months ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago