davidsvy / cosformer-pytorch
Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax in Attention".
☆43 · Updated 3 years ago
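The core of cosFormer is a linear attention that replaces softmax with a ReLU feature map plus a cosine positional re-weighting, cos(π/2 · (i − j)/M), which the paper decomposes via the angle-addition identity so the computation stays linear in sequence length. Below is a minimal, non-causal sketch of that mechanism; it illustrates the idea rather than reproducing this repository's code, and the function name `cosformer_attention` and single-head layout are assumptions for brevity.

```python
import math
import torch
import torch.nn.functional as F

def cosformer_attention(q, k, v, eps=1e-6):
    """Non-causal cosFormer-style linear attention (sketch, not the repo's code).
    q, k, v: (batch, seq_len, dim)."""
    b, n, d = q.shape
    # Non-negative feature map in place of softmax.
    q, k = F.relu(q), F.relu(k)
    # cos(pi/2 * (i-j)/M) = cos(pi*i/2M)cos(pi*j/2M) + sin(pi*i/2M)sin(pi*j/2M),
    # so the re-weighting splits into per-position factors on Q and K.
    pos = torch.arange(n, device=q.device, dtype=q.dtype)
    cos_w = torch.cos(math.pi / 2 * pos / n).view(1, n, 1)
    sin_w = torch.sin(math.pi / 2 * pos / n).view(1, n, 1)
    q_cos, q_sin = q * cos_w, q * sin_w
    k_cos, k_sin = k * cos_w, k * sin_w
    # Associativity: compute (K^T V) first -> O(n d^2) instead of O(n^2 d).
    kv_cos = torch.einsum('bnd,bne->bde', k_cos, v)
    kv_sin = torch.einsum('bnd,bne->bde', k_sin, v)
    out = (torch.einsum('bnd,bde->bne', q_cos, kv_cos)
           + torch.einsum('bnd,bde->bne', q_sin, kv_sin))
    # Row-wise normalizer: sum over keys of the re-weighted similarities.
    z = (torch.einsum('bnd,bd->bn', q_cos, k_cos.sum(dim=1))
         + torch.einsum('bnd,bd->bn', q_sin, k_sin.sum(dim=1)))
    return out / (z.unsqueeze(-1) + eps)
```

For example, `cosformer_attention(torch.randn(2, 128, 64), torch.randn(2, 128, 64), torch.randn(2, 128, 64))` returns a `(2, 128, 64)` tensor without ever materializing a 128×128 attention matrix.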
Related projects
Alternatives and complementary repositories for cosformer-pytorch
- [ICLR 2023] "Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Better Representations", Ziyu Jian… ☆23 · Updated last year
- Code for Explicit Sparse Transformer ☆56 · Updated last year
- Custom PyTorch implementation of MoCo v3 ☆44 · Updated 3 years ago
- This repository contains the code for the Findings of EMNLP 2021 paper: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆116 · Updated 3 years ago
- PyTorch implementation of Pay Attention to MLPs ☆39 · Updated 3 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆106 · Updated 4 years ago
- Official PyTorch implementation of "Continual Transformers: Redundancy-Free Attention for Online Inference" [ICLR 2023] ☆28 · Updated last year
- Project page for the paper "Self-supervised Representation Learning with Relative Predictive Coding" ☆17 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆27 · Updated 4 years ago
- Implementation of Multistream Transformers in PyTorch ☆53 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012; a minimal sketch of the ReLA mechanism follows this list ☆49 · Updated 2 years ago
- Code for "Understanding and Improving Layer Normalization"☆46Updated 4 years ago
- ☆16Updated last year
- ☆31Updated 3 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using Pytorch☆70Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nystr\"om Method (NeurIPS 2021)☆59Updated 2 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective☆87Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model☆59Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch☆55Updated 3 years ago
- Pytorch implementation of Performer from the paper "Rethinking Attention with Performers".☆23Updated 4 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch☆51Updated 3 years ago
- ☆56Updated 3 years ago
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning☆11Updated 11 months ago
- This is an official PyTorch/GPU implementation of SupMAE ☆77 · Updated 2 years ago
- Code and data to accompany the camera-ready version of "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Tra… ☆27 · Updated 3 years ago
- Sparse Attention with Linear Units ☆17 · Updated 3 years ago
- Code for the paper "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals" ☆16 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆46 · Updated 4 years ago
- Source code for SoftCTC. The original paper can be found at https://arxiv.org/abs/2212.02135 ☆19 · Updated last year
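Two of the repositories above ("Implementation of a Transformer using ReLA…" and "Sparse Attention with Linear Units") revolve around the same trick as cosFormer of dropping softmax, in their case for a plain ReLU on the attention scores, with the un-normalized output stabilized by a normalization layer. Here is a hedged sketch of that mechanism as described in arXiv:2104.07012; the function name and the plain (ungated) RMS normalization are illustrative assumptions, not the repositories' actual code.

```python
import torch
import torch.nn.functional as F

def rela_attention(q, k, v, eps=1e-6):
    """ReLA-style attention sketch: ReLU in place of softmax.
    q, k, v: (batch, seq_len, dim)."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # (batch, n, n)
    attn = F.relu(scores)                        # sparse, un-normalized weights
    out = attn @ v
    # Without softmax the output scale is unbounded; the paper stabilizes it
    # with a normalization layer (variants include gated RMSNorm). Plain RMS here.
    rms = out.pow(2).mean(dim=-1, keepdim=True).add(eps).sqrt()
    return out / rms
```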