lucidrains / deep-cross-attention
Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch (a minimal sketch of the idea follows below)
☆94 · Updated 8 months ago
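A minimal sketch of the core idea, for orientation: instead of one shared residual stream, each block's queries, keys, and values read their own learned combination of all earlier layer outputs. The class name, the static mixing weights, and the use of `nn.MultiheadAttention` are simplifications of mine, not the repo's API (the paper also studies input-dependent combination weights).

```python
import torch
from torch import nn

class DCABlock(nn.Module):
    """Sketch only: q, k and v each draw on a separately weighted
    combination of every previous layer output."""
    def __init__(self, dim, num_prev, heads=8):
        super().__init__()
        self.mix = nn.Parameter(torch.zeros(3, num_prev))  # per-stream weights over prior outputs
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hiddens):
        h = torch.stack(hiddens)                  # (L, batch, seq, dim)
        w = self.mix.softmax(dim=-1)              # (3, L): one mix each for q, k, v
        q, k, v = torch.einsum('sl,lbnd->sbnd', w, h)
        out, _ = self.attn(self.norm(q), self.norm(k), self.norm(v))
        return out

# each block appends its output to the pool of hiddens later blocks mix over
hiddens = [torch.randn(2, 16, 64)]                # token embeddings
for depth in range(1, 4):
    hiddens.append(DCABlock(dim=64, num_prev=depth)(hiddens))
```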
Alternatives and similar repositories for deep-cross-attention
Users interested in deep-cross-attention are comparing it to the libraries listed below.
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public (sketch after the list) ☆91 · Updated 4 months ago
- Implementation of the proposed MaskBit from ByteDance AI ☆82 · Updated 11 months ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… (sketch after the list) ☆47 · Updated last month
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch (sketch after the list) ☆66 · Updated 2 months ago
- Implementation of the proposed Adam-atan2 from Google DeepMind, in PyTorch (sketch after the list) ☆132 · Updated last week
- Implementation of a multimodal diffusion transformer in PyTorch ☆106 · Updated last year
- Explorations into improving ViTArc with Slot Attention ☆43 · Updated last year
- Implementation of Agent Attention in PyTorch (sketch after the list) ☆91 · Updated last year
- Explorations into adversarial losses on top of the autoregressive loss for language modeling ☆38 · Updated 8 months ago
- Implementation of the dynamic chunking mechanism in H-Net by Hwang et al. of Carnegie Mellon ☆65 · Updated 2 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" (simplified sketch after the list) ☆136 · Updated last week
- Implementation of the proposed LVMAE from the paper "Extending Video Masked Autoencoders to 128 Frames", in PyTorch ☆54 · Updated 11 months ago
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in PyTorch (sketch after the list) ☆110 · Updated 2 months ago
- Autoregressive Image Generation ☆32 · Updated 4 months ago
- Implementation of TiTok, proposed by ByteDance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆181 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention branches, akin to mixture-of-experts ☆119 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- Triton implementation of bi-directional (non-causal) linear attention (reference sketch after the list) ☆56 · Updated 8 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning (sketch after the list) ☆131 · Updated last month
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆104 · Updated 11 months ago
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆36 · Updated 11 months ago
- Explorations into the recently proposed Taylor Series Linear Attention (sketch after the list) ☆99 · Updated last year
- An implementation of FAdam (Fisher Adam) in PyTorch ☆50 · Updated 3 months ago
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer) ☆120 · Updated 8 months ago
- Implementation of Infini-Transformer in PyTorch (sketch after the list) ☆113 · Updated 9 months ago
- Explorations into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆72 · Updated last week
- The open-source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters" ☆29 · Updated this week
- Official PyTorch implementation of the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆54 · Updated 8 months ago
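Below are minimal, hedged sketches of several of the entries above; class, function, and parameter names are mine, and all defaults are illustrative unless noted. First, the Hyper-Connections entry: the static variant as I understand it (the paper also has a dynamic variant where the mixing weights depend on the input).

```python
import torch
from torch import nn

class StaticHyperConnection(nn.Module):
    """The single residual stream becomes n parallel streams: the layer reads
    a learned mix of them, its output is written back with learned per-stream
    weights, and the streams themselves mix through a learned n x n matrix."""
    def __init__(self, n_streams):
        super().__init__()
        self.read = nn.Parameter(torch.ones(n_streams) / n_streams)
        self.write = nn.Parameter(torch.ones(n_streams))
        self.mix = nn.Parameter(torch.eye(n_streams))  # identity init: near-plain residual start

    def forward(self, streams, layer):
        x = torch.einsum('s,sbnd->bnd', self.read, streams)   # what the layer sees
        streams = torch.einsum('st,tbnd->sbnd', self.mix, streams)
        return streams + self.write[:, None, None, None] * layer(x)

# usage: expand the input into n streams, run layers, then collapse at the end
n, x = 4, torch.randn(2, 16, 64)
streams = x.expand(n, *x.shape).clone()
streams = StaticHyperConnection(n)(streams, layer=nn.Linear(64, 64))
out = streams.sum(dim=0)
```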
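For the 2-simplicial attention entry, the trilinear form as I read it: logits are taken over pairs of keys, ⟨q_i, k_j, k'_k⟩, and values combine element-wise. This naive reference is O(n³) in sequence length (practical versions restrict to local windows and fuse kernels), and the 1/√d scaling here is my choice.

```python
import torch

def two_simplicial_attention(q, k1, k2, v1, v2):
    # trilinear logits over key pairs: (b, h, n, n, n) -- naive O(n^3) reference
    logits = torch.einsum('bhid,bhjd,bhkd->bhijk', q, k1, k2) / q.shape[-1] ** 0.5
    attn = logits.flatten(-2).softmax(dim=-1).view_as(logits)   # softmax over (j, k) pairs
    # weighted sum of element-wise value products v1_j * v2_k
    return torch.einsum('bhijk,bhjd,bhkd->bhid', attn, v1, v2)
```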
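For the HL-Gauss entry: regression is recast as classification over bins, with soft targets obtained by integrating a Gaussian centered on each scalar target over the bin edges. A self-contained sketch (bin range, bin count, and sigma below are illustrative):

```python
import torch
import torch.nn.functional as F

def hl_gauss_target(y, bin_edges, sigma):
    """Soft histogram labels: integrate a Gaussian centered on each scalar
    target over the bins (differences of the CDF at the bin edges)."""
    cdf = torch.distributions.Normal(y[:, None], sigma).cdf(bin_edges[None, :])
    probs = cdf[:, 1:] - cdf[:, :-1]
    return probs / probs.sum(dim=-1, keepdim=True).clamp(min=1e-8)

# illustrative: regress values in [0, 10] over 64 bins
edges = torch.linspace(0.0, 10.0, 65)
centers = (edges[:-1] + edges[1:]) / 2
y = torch.rand(8) * 10
logits = torch.randn(8, 64, requires_grad=True)
loss = F.cross_entropy(logits, hl_gauss_target(y, edges, sigma=0.3))
pred = (logits.softmax(dim=-1) * centers).sum(dim=-1)   # expected-value readout
```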
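For the Adam-atan2 entry: the update replaces Adam's `m_hat / (sqrt(v_hat) + eps)` with `a * atan2(m_hat, b * sqrt(v_hat))`, which is bounded and removes the `eps` hyperparameter. A single-tensor sketch; the `a` and `b` defaults here are illustrative, not necessarily the repo's:

```python
import torch

@torch.no_grad()
def adam_atan2_step(p, m, v, step, lr=1e-3, betas=(0.9, 0.99), a=1.27, b=1.0):
    """One Adam-atan2 update for parameter p with persistent moments m, v."""
    b1, b2 = betas
    m.lerp_(p.grad, 1 - b1)             # EMA of gradients
    v.lerp_(p.grad.square(), 1 - b2)    # EMA of squared gradients
    m_hat = m / (1 - b1 ** step)        # bias correction
    v_hat = v / (1 - b2 ** step)
    # atan2 replaces division by sqrt(v_hat) + eps, eliminating eps entirely
    p.sub_(torch.atan2(m_hat, b * v_hat.sqrt()), alpha=lr * a)
```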
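For the Agent Attention entry: a small set of `m` agent tokens first gathers from all keys and values, then the queries attend only to the agents, so the two softmax attentions cost O(n·m) instead of O(n²). A sketch using pooled queries as agents (one simple choice; the paper explores others):

```python
import torch
import torch.nn.functional as F

def agent_attention(q, k, v, num_agents=64):
    """q, k, v: (batch, heads, seq, dim). Two small attentions via agents."""
    b, h, n, d = q.shape
    a = F.adaptive_avg_pool1d(q.reshape(b * h, n, d).transpose(1, 2), num_agents)
    a = a.transpose(1, 2).reshape(b, h, num_agents, d)          # pooled-query agents
    gathered = F.softmax(a @ k.transpose(-2, -1) / d ** 0.5, dim=-1) @ v   # agents read keys
    return F.softmax(q @ a.transpose(-2, -1) / d ** 0.5, dim=-1) @ gathered  # queries read agents
```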
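For the ZClip entry, a simplified reading rather than the official algorithm: keep running estimates of the gradient-norm mean and variance, and when the current norm's z-score exceeds a threshold, rescale the gradient back down to the threshold.

```python
import torch

def zclip_step(parameters, state, alpha=0.97, z_thresh=2.5, warmup=25):
    """Simplified z-score gradient clipping in the spirit of ZClip (my
    reading; the warmup and EMA details are assumptions)."""
    params = [p for p in parameters if p.grad is not None]
    gnorm = torch.norm(torch.stack([p.grad.norm() for p in params])).item()
    state.setdefault('norms', [])
    if len(state['norms']) < warmup:               # collect statistics first
        state['norms'].append(gnorm)
        if len(state['norms']) == warmup:
            t = torch.tensor(state['norms'])
            state['mean'], state['var'] = t.mean().item(), t.var().item()
        return gnorm
    mean, std = state['mean'], state['var'] ** 0.5
    z = (gnorm - mean) / max(std, 1e-6)
    if z > z_thresh:                               # spike: shrink back to the threshold
        scale = (mean + z_thresh * std) / gnorm
        for p in params:
            p.grad.mul_(scale)
        gnorm = mean + z_thresh * std
    state['mean'] = alpha * mean + (1 - alpha) * gnorm      # EMA updates on the (clipped) norm
    state['var'] = alpha * state['var'] + (1 - alpha) * (gnorm - mean) ** 2
    return gnorm
```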
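For the GradNorm entry: each task's gradient norm at a shared layer is pushed toward a common scale, modulated by how fast that task is learning, and the loss weights are updated to close the gap. A sketch of one update (variable names mine); call it before the usual `(weights.detach() * losses).sum().backward()`:

```python
import torch

def gradnorm_step(losses, initial_losses, weights, shared_param, weight_opt, alpha=1.5):
    """losses: list of per-task scalar losses; initial_losses: tensor of the
    losses at step 0; weights: nn.Parameter of per-task loss weights."""
    # per-task gradient norms at the shared layer, differentiable wrt the weights
    G = torch.stack([
        torch.autograd.grad(w * L, shared_param, retain_graph=True, create_graph=True)[0].norm()
        for w, L in zip(weights, losses)
    ])
    with torch.no_grad():
        rates = torch.stack(losses) / initial_losses   # inverse training rate per task
        rates = rates / rates.mean()
        target = G.mean() * rates ** alpha             # desired gradient norm per task
    balance_loss = (G - target).abs().sum()
    weights.grad = torch.autograd.grad(balance_loss, weights, retain_graph=True)[0]
    weight_opt.step()
    with torch.no_grad():                              # renormalize: weights sum to num tasks
        weights.clamp_(min=1e-3)
        weights.mul_(len(losses) / weights.sum())
```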
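For the bi-directional linear attention entry: in the non-causal case the key-value summary can be computed once over the whole sequence, with no running prefix as in the causal case, which is what makes a fast fused kernel attractive. A plain-PyTorch reference (the repo itself is a Triton kernel; the `elu + 1` feature map is one common choice):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: O(n) in sequence length.
    q, k, v: (batch, heads, seq, dim)."""
    q, k = F.elu(q) + 1, F.elu(k) + 1                      # positive feature map
    kv = torch.einsum('bhnd,bhne->bhde', k, v)             # one global (d, e) summary
    z = torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2))     # per-query normalizer
    return torch.einsum('bhnd,bhde->bhne', q, kv) / (z[..., None] + eps)
```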
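For the Forgetting Transformer entry: as I understand the core mechanism, a data-dependent decay is added to the attention logits, score(i, j) = q_i·k_j / √d + Σ_{l=j+1..i} log f_l, with per-position forget gates f in (0, 1). A naive (non-fused) sketch:

```python
import torch

def forgetting_attention(q, k, v, fgate):
    """q, k, v: (b, h, n, d); fgate: (b, h, n) forget gates in (0, 1).
    Cumulative log-forget between positions biases the causal logits."""
    b, h, n, d = q.shape
    logf = fgate.clamp(min=1e-6).log().cumsum(dim=-1)      # running sum of log f
    bias = logf[..., :, None] - logf[..., None, :]         # d_ij = sum_{j<l<=i} log f_l
    logits = q @ k.transpose(-2, -1) / d ** 0.5 + bias
    mask = torch.ones(n, n, dtype=torch.bool, device=q.device).triu(1)
    return logits.masked_fill(mask, float('-inf')).softmax(dim=-1) @ v
```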
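For the Taylor Series Linear Attention entry: the exponential in softmax attention is approximated to second order, exp(⟨q,k⟩) ≈ 1 + ⟨q,k⟩ + ⟨q,k⟩²/2, which factors into an explicit feature map:

```python
import torch

def taylor_feature_map(x):
    """phi such that phi(q) . phi(k) = 1 + <q,k> + <q,k>**2 / 2,
    the second-order Taylor expansion of exp(<q,k>)."""
    ones = x.new_ones(*x.shape[:-1], 1)
    x2 = torch.einsum('...i,...j->...ij', x, x).flatten(-2) * 0.5 ** 0.5
    return torch.cat((ones, x, x2), dim=-1)
```

Swapping this map in for the `elu + 1` map in the `linear_attention` reference above gives a linear-time approximation of softmax attention; since the feature dimension grows to 1 + d + d², the head dimension is kept small in practice.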
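For the Infini-Transformer entry: each attention layer carries a compressive memory across segments; queries retrieve from it with a linear-attention read, and the segment's keys and values are accumulated into it for future segments (the paper additionally gates this retrieval against local attention, omitted here). A sketch of the memory path:

```python
import torch
import torch.nn.functional as F

def infini_memory_step(q, k, v, M, z):
    """One segment's pass through the compressive memory: retrieve with a
    linear-attention read, then accumulate this segment's associations."""
    sq, sk = F.elu(q) + 1, F.elu(k) + 1                # positive feature map
    denom = torch.einsum('bhnd,bhd->bhn', sq, z)[..., None] + 1e-6
    retrieved = (sq @ M) / denom                       # read what earlier segments wrote
    M = M + sk.transpose(-2, -1) @ v                   # linear (non-delta) memory update
    z = z + sk.sum(dim=2)                              # running normalizer
    return retrieved, M, z

# the memory persists across segments of one long sequence
b, h, n, d = 2, 8, 128, 64
M, z = torch.zeros(b, h, d, d), torch.zeros(b, h, d)
for segment in torch.randn(4, b, h, n, d):
    retrieved, M, z = infini_memory_step(segment, segment, segment, M, z)
```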