lucidrains / bidirectional-cross-attention
A simple cross attention that updates both the source and target in one step
☆171 · Updated last year
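The core idea is that a single similarity matrix between the two sequences can be normalized along each axis, so that each sequence attends to the other and both are updated in one pass. Below is a minimal sketch of that idea in PyTorch; the class name, projection layout, and hyperparameters are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of bidirectional cross attention: one similarity matrix,
# softmax-normalized along each axis, updates both sequences in one step.
# Names and shapes here are assumptions for illustration only.
import torch
from torch import nn


class BidirectionalCrossAttentionSketch(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64):
        super().__init__()
        inner = heads * dim_head
        self.heads = heads
        self.scale = dim_head ** -0.5
        # each sequence gets its own query/key/value projections
        self.to_qkv_a = nn.Linear(dim, inner * 3, bias=False)
        self.to_qkv_b = nn.Linear(dim, inner * 3, bias=False)
        self.to_out_a = nn.Linear(inner, dim, bias=False)
        self.to_out_b = nn.Linear(inner, dim, bias=False)

    def forward(self, a, b):
        # a: (batch, n, dim), b: (batch, m, dim)
        h = self.heads

        def split(t):
            q, k, v = t.chunk(3, dim=-1)
            return [x.unflatten(-1, (h, -1)).transpose(1, 2) for x in (q, k, v)]

        qa, ka, va = split(self.to_qkv_a(a))
        qb, kb, vb = split(self.to_qkv_b(b))

        # one similarity matrix shared by both directions: (batch, heads, n, m)
        sim = torch.einsum('b h i d, b h j d -> b h i j', qa, kb) * self.scale

        # a attends to b along the last axis, b attends to a along the other
        attn_a = sim.softmax(dim=-1)
        attn_b = sim.softmax(dim=-2)

        out_a = torch.einsum('b h i j, b h j d -> b h i d', attn_a, vb)
        out_b = torch.einsum('b h i j, b h i d -> b h j d', attn_b, va)

        out_a = out_a.transpose(1, 2).flatten(-2)
        out_b = out_b.transpose(1, 2).flatten(-2)
        return self.to_out_a(out_a), self.to_out_b(out_b)
```

For example, feeding `a = torch.randn(1, 32, 256)` and `b = torch.randn(1, 64, 256)` through a module constructed with `dim=256` returns updated tensors with the same two shapes, since both sequences are refined by the same attention map.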
Alternatives and similar repositories for bidirectional-cross-attention:
Users interested in bidirectional-cross-attention are comparing it to the libraries listed below
- Implementation of CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification ☆199 · Updated 4 years ago
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch ☆97 · Updated last year
- Official PyTorch implementation for the paper "CARD: Classification and Regression Diffusion Models" ☆226 · Updated 2 years ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆252 · Updated 8 months ago
- Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding ☆48 · Updated 7 months ago
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆108 · Updated last year
- Official Open Source code for "Masked Autoencoders As Spatiotemporal Learners" ☆337 · Updated 5 months ago
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆66 · Updated last year
- Official Implementation for Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆59 · Updated 10 months ago
- [NeurIPS 2023, Spotlight] Rank-N-Contrast: Learning Continuous Representations for Regression ☆111 · Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆572 · Updated 2 years ago
- Official Implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆108 · Updated 3 weeks ago
- Pytorch implementation of Swin MAE https://arxiv.org/abs/2212.13805 ☆85 · Updated last year
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆103 · Updated 2 months ago
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in Pytorch ☆92 · Updated last year
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆338 · Updated 3 months ago
- This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆107 · Updated last year
- Official repository for "Orthogonal Projection Loss" (ICCV'21) ☆121 · Updated 3 years ago
- Transformers w/o Attention, based fully on MLPs ☆93 · Updated last year
- An implementation of the efficient attention module. ☆310 · Updated 4 years ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆130 · Updated 3 months ago
- [TMLR 2022] High-Modality Multimodal Transformer ☆115 · Updated 6 months ago
- Official implementation of CrossViT. https://arxiv.org/abs/2103.14899 ☆385 · Updated 3 years ago
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆355 · Updated 2 years ago
- ☆155 · Updated 2 years ago
- iFormer: Inception Transformer ☆247 · Updated 2 years ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… ☆99 · Updated 3 years ago
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling ☆93 · Updated 3 weeks ago
- Simple MAE (masked autoencoders) with pytorch and pytorch-lightning. ☆42 · Updated last year
- Recent Advances in MLP-based Models (MLP is all you need!) ☆115 · Updated 2 years ago