An implementation of the efficient attention module.
☆329 · Nov 30, 2020 · Updated 5 years ago
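For context on what this module does: efficient attention (Shen et al., "Efficient Attention: Attention with Linear Complexities") replaces the quadratic `softmax(QKᵀ)V` with `ρ_q(Q) (ρ_k(K)ᵀ V)` — softmax over the feature dimension for queries and over the sequence dimension for keys — so a small `dk × dv` context matrix is computed first and cost grows linearly with sequence length. A minimal single-head, unbatched NumPy sketch (function names are mine, not the repo's API):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(q, k, v):
    """Linear-complexity attention: O(n * dk * dv) instead of O(n^2 * d).

    q, k: (n, dk) query/key matrices; v: (n, dv) value matrix.
    """
    q = softmax(q, axis=-1)     # normalize each query over its dk features
    k = softmax(k, axis=0)      # normalize each key channel over n positions
    context = k.T @ v           # (dk, dv) global context matrix
    return q @ context          # (n, dv) — no n x n attention map is formed
```

The key design point is associativity: `(ρ_q(Q) ρ_k(K)ᵀ) V` and `ρ_q(Q) (ρ_k(K)ᵀ V)` are the same product, but the second grouping avoids ever materializing the `n × n` map.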
Alternatives and similar repositories for efficient-attention
Users interested in efficient-attention are comparing it to the libraries listed below.
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆24 · Jan 7, 2021 · Updated 5 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆824 · May 5, 2024 · Updated last year
- Attention mechanism ☆52 · Sep 13, 2021 · Updated 4 years ago
- PyTorch library for fast transformer implementations ☆1,767 · Mar 23, 2023 · Updated 3 years ago
- A list of efficient attention modules ☆1,022 · Aug 23, 2021 · Updated 4 years ago
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,175 · Feb 2, 2022 · Updated 4 years ago
- [ICLR'22 Oral] Implementation of "CycleMLP: A MLP-like Architecture for Dense Prediction" ☆292 · Apr 25, 2022 · Updated 3 years ago
- ☆13 · Nov 7, 2021 · Updated 4 years ago
- [ICME 2022] Code for the paper "SimViT: Exploring a Simple Vision Transformer with Sliding Windows" ☆67 · Oct 11, 2022 · Updated 3 years ago
- ☆196 · Feb 14, 2023 · Updated 3 years ago
- ☆10 · Dec 13, 2022 · Updated 3 years ago
- ☆110 · Sep 15, 2021 · Updated 4 years ago
- FairSeq repo with Apollo optimizer ☆113 · Dec 20, 2023 · Updated 2 years ago
- Local Attention - Flax module for JAX ☆22 · May 26, 2021 · Updated 4 years ago
- Code for Joint Neural Architecture Search and Quantization ☆14 · Apr 10, 2019 · Updated 7 years ago
- MLP-like Vision Permutator for Visual Recognition (PyTorch) ☆192 · Mar 31, 2022 · Updated 4 years ago
- [ICCV 2023] You Only Look at One Partial Sequence ☆343 · Oct 21, 2023 · Updated 2 years ago
- ☆249 · Mar 16, 2022 · Updated 4 years ago
- A practical implementation of Linformer for PyTorch ☆423 · Jul 27, 2022 · Updated 3 years ago
- Official code for Cross-Covariance Image Transformer (XCiT) ☆676 · Sep 28, 2021 · Updated 4 years ago
- [NeurIPS 2023] Lightweight Vision Transformer with Bidirectional Interaction ☆27 · Oct 27, 2023 · Updated 2 years ago
- Implementation of the paper "Implicit Feature Refinement for Instance Segmentation" ☆20 · Oct 27, 2021 · Updated 4 years ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆199 · Dec 2, 2022 · Updated 3 years ago
- Representative Graph Neural Network ☆35 · Aug 12, 2020 · Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · May 29, 2024 · Updated last year
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- [CVPR 2022 (Oral)] Video K-Net: A Simple, Strong, and Unified Baseline for Video Segmentation ☆155 · Aug 19, 2023 · Updated 2 years ago
- [CVPR 2022] Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Feb 26, 2025 · Updated last year
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,531 · Nov 18, 2020 · Updated 5 years ago
- Directed masked autoencoders ☆14 · Mar 25, 2026 · Updated 2 weeks ago
- ☆74 · Dec 8, 2022 · Updated 3 years ago
- Speech recognition and signal processing (음성인식과 신호처리) ☆14 · Sep 12, 2021 · Updated 4 years ago
- ☆22 · Aug 1, 2018 · Updated 7 years ago
- PyTorch implementation of Non-Local Neural Networks (https://arxiv.org/pdf/1711.07971.pdf) ☆253 · Feb 13, 2023 · Updated 3 years ago
- ☆11 · Oct 3, 2021 · Updated 4 years ago
- ☆33 · Oct 9, 2022 · Updated 3 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 · Jun 23, 2020 · Updated 5 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆25 · Oct 5, 2020 · Updated 5 years ago
- A PyTorch implementation of Global Self-Attention Network, a fully attention-based backbone for vision tasks ☆94 · Nov 21, 2020 · Updated 5 years ago