lucidrains / deep-cross-attention
Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch
☆81 · Updated last month
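For orientation, here is a heavily hedged PyTorch sketch of the core DeepCrossAttention idea as the paper frames it: each block's query, key, and value inputs are separate learned combinations over all previous layers' outputs, generalizing the single residual stream. Static per-layer mixing weights are used here for brevity (the paper's combinations can be richer); every module and parameter name below is hypothetical, not this repo's actual API.

```python
# Hypothetical sketch of the DeepCrossAttention idea: the q / k / v inputs
# of each block are separate learned mixes over ALL prior layer outputs,
# instead of reading one shared residual stream. Not the repo's actual API.
import torch
from torch import nn

class LayerMix(nn.Module):
    """Softmax-weighted combination over the stack of prior hidden states."""
    def __init__(self, num_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hiddens):  # hiddens: (num_layers, batch, seq, dim)
        return torch.einsum('l,lbnd->bnd', self.weights.softmax(dim=-1), hiddens)

class DCABlock(nn.Module):
    def __init__(self, dim, heads, num_prior_layers):
        super().__init__()
        # one learned mix per attention input stream
        self.mix_q = LayerMix(num_prior_layers)
        self.mix_k = LayerMix(num_prior_layers)
        self.mix_v = LayerMix(num_prior_layers)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hiddens):  # (num_prior_layers, batch, seq, dim)
        q = self.norm(self.mix_q(hiddens))
        k = self.norm(self.mix_k(hiddens))
        v = self.norm(self.mix_v(hiddens))
        out, _ = self.attn(q, k, v, need_weights=False)
        return out  # the caller appends this to the hidden-state stack

# toy usage: the stack of hidden states grows by one per block
x = torch.randn(2, 16, 64)           # (batch, seq, dim) token embeddings
hiddens = x.unsqueeze(0)             # stack starts with the embedding
block = DCABlock(dim=64, heads=4, num_prior_layers=1)
hiddens = torch.cat([hiddens, block(hiddens).unsqueeze(0)], dim=0)
```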
Alternatives and similar repositories for deep-cross-attention:
Users interested in deep-cross-attention are comparing it to the libraries listed below.
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆78 · Updated last month
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch (see the Adam-atan2 sketch after this list) ☆102 · Updated 4 months ago
- Implementation of the proposed MaskBit from ByteDance AI ☆75 · Updated 4 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch (see the HL-Gauss sketch after this list) ☆56 · Updated last month
- Implementation of a Light Recurrent Unit in PyTorch ☆47 · Updated 5 months ago
- Implementation of Agent Attention in PyTorch ☆90 · Updated 8 months ago
- Implementation of a multimodal diffusion transformer in PyTorch ☆101 · Updated 9 months ago
- Some personal experiments around routing tokens to different autoregressive attention modules, akin to mixture-of-experts ☆117 · Updated 5 months ago
- An implementation of FAdam (Fisher Adam) in PyTorch ☆43 · Updated 10 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆44 · Updated last month
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling ☆34 · Updated 9 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆35 · Updated last month
- The official implementation of OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows ☆57 · Updated 2 weeks ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" (see the forget-gate sketch after this list) ☆76 · Updated last week
- Implementation of Infini-Transformer in PyTorch ☆110 · Updated 2 months ago
- Self-contained PyTorch implementation of a Sinkhorn-based router, for mixture-of-experts or otherwise (see the router sketch after this list) ☆33 · Updated 6 months ago
- [ICLR 2025] Official PyTorch implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆143 · Updated last week
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer) ☆77 · Updated last month
- Implementation of the proposed LVMAE from the paper "Extending Video Masked Autoencoders to 128 Frames", in PyTorch ☆47 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆95 · Updated 7 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆42 · Updated last week
- Official PyTorch implementation for the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆50 · Updated 2 months ago
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆47 · Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆121 · Updated 7 months ago
- Explorations into improving ViTArc with Slot Attention ☆39 · Updated 5 months ago
- Implementation of TiTok, proposed by ByteDance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆170 · Updated 9 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model, with or without FIM ☆54 · Updated 11 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆87 · Updated 4 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆47 · Updated this week
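A few of the techniques named above are simple enough to sketch. The snippets below are hedged, self-contained PyTorch illustrations under stated assumptions, not the listed repos' actual APIs.

The Adam-atan2 sketch referenced at the adam-atan2 entry: the idea from the Google DeepMind paper is to replace Adam's `m_hat / (sqrt(v_hat) + eps)` with `atan2(m_hat, b * sqrt(v_hat))`, which is bounded and removes the `eps` hyperparameter. The function below is a hypothetical single-tensor update step; the `a` and `b` scale constants default to 1 here, which is an assumption.

```python
# Hypothetical single-tensor sketch of the Adam-atan2 update rule:
# atan2(m_hat, b * sqrt(v_hat)) replaces m_hat / (sqrt(v_hat) + eps).
import torch

def adam_atan2_step(param, grad, m, v, step, lr=1e-3,
                    betas=(0.9, 0.99), a=1.0, b=1.0):
    beta1, beta2 = betas
    m.mul_(beta1).add_(grad, alpha=1 - beta1)             # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)   # second moment
    m_hat = m / (1 - beta1 ** step)                       # bias correction
    v_hat = v / (1 - beta2 ** step)
    # bounded, eps-free update direction
    param.add_(torch.atan2(m_hat, b * v_hat.sqrt()), alpha=-lr * a)
```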
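The HL-Gauss sketch referenced at the Gaussian Histogram Loss entry: a scalar regression target is converted into a categorical distribution by integrating a Gaussian centered on the target over a fixed set of bins, the model is trained with cross-entropy against that soft target, and prediction is the expectation over bin centers. The bin layout and function names below are illustrative, not the repo's wrappers.

```python
# Hedged sketch of HL-Gauss: regress by classifying over bins whose
# target distribution is a Gaussian integrated over each bin.
import torch
import torch.nn.functional as F

def hl_gauss_targets(y, bin_edges, sigma):
    # y: (batch,) scalar targets; bin_edges: (num_bins + 1,)
    cdf = torch.erf((bin_edges - y[:, None]) / (sigma * 2 ** 0.5))
    probs = (cdf[:, 1:] - cdf[:, :-1]) / 2           # Gaussian mass per bin
    return probs / probs.sum(dim=-1, keepdim=True)   # renormalize clipped tails

def hl_gauss_loss(logits, y, bin_edges, sigma=0.1):
    # cross_entropy accepts soft probability targets (PyTorch >= 1.10)
    return F.cross_entropy(logits, hl_gauss_targets(y, bin_edges, sigma))

def hl_gauss_predict(logits, bin_edges):
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return logits.softmax(dim=-1) @ centers          # expected value

# toy usage: 32 bins on [-1, 1]
edges = torch.linspace(-1.0, 1.0, 33)
loss = hl_gauss_loss(torch.randn(4, 32), torch.rand(4) * 2 - 1, edges)
```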
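The forget-gate sketch referenced at the Forgetting Transformer entry: as I read the paper, a per-token forget gate f_t in (0, 1) contributes a log-decay bias D_ij = Σ_{t=j+1..i} log f_t that is added to the causal attention logits, so distant keys are softly down-weighted. The sketch below covers only that bias term; names are illustrative.

```python
# Hedged sketch of the Forgetting Transformer's forget-gate bias:
# D_ij = sum_{t=j+1..i} log f_t, with f_t = sigmoid(gate_logits_t).
import torch
import torch.nn.functional as F

def forget_gate_bias(gate_logits):            # (batch, seq) pre-activation gates
    log_f = F.logsigmoid(gate_logits)         # log f_t, always <= 0
    cum = log_f.cumsum(dim=-1)
    bias = cum[:, :, None] - cum[:, None, :]  # D_ij = cum_i - cum_j
    n = gate_logits.shape[-1]
    causal = torch.triu(torch.ones(n, n, dtype=torch.bool,
                                   device=gate_logits.device), diagonal=1)
    return bias.masked_fill(causal, float('-inf'))

# usage: logits = (q @ k.transpose(-2, -1)) * scale + forget_gate_bias(gates)[:, None]
```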
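The router sketch referenced at the Sinkhorn entry: a few rounds of alternating row/column normalization in log space push the token-expert affinities toward a balanced assignment before the top-1 pick. This is a generic illustration of the Sinkhorn trick, not that repo's implementation.

```python
# Hedged sketch of a Sinkhorn-based MoE router.
import torch

def sinkhorn_route(logits, iters=8):
    # logits: (num_tokens, num_experts) token-expert affinities
    z = logits.float()
    for _ in range(iters):
        z = z - z.logsumexp(dim=0, keepdim=True)  # balance expert load (columns)
        z = z - z.logsumexp(dim=1, keepdim=True)  # normalize per token (rows)
    return z.exp().argmax(dim=-1)                 # balanced top-1 expert per token

experts = sinkhorn_route(torch.randn(128, 8))     # expert index per token
```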