Implementation of STAM (Space Time Attention Model), a pure and simple attention model that reaches SOTA for video classification
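The core idea behind STAM — attending over patches within each frame, then over frames across time — can be sketched in plain PyTorch. This is an illustrative toy module only (the class name and parameters below are hypothetical), not the STAM-pytorch repository's actual API.

```python
# Minimal sketch of factorized space-time attention for video.
# Illustration only; not the STAM-pytorch repository's API.
import torch
import torch.nn as nn

class SpaceTimeAttention(nn.Module):
    """Attend over patches within each frame, then over frames per patch."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, dim)
        b, f, p, d = x.shape
        # Spatial step: fold frames into the batch, attend over patches.
        xs = x.reshape(b * f, p, d)
        xs, _ = self.spatial(xs, xs, xs)
        x = xs.reshape(b, f, p, d)
        # Temporal step: fold patches into the batch, attend over frames.
        xt = x.transpose(1, 2).reshape(b * p, f, d)
        xt, _ = self.temporal(xt, xt, xt)
        return xt.reshape(b, p, f, d).transpose(1, 2)

x = torch.randn(2, 8, 16, 64)       # 2 clips, 8 frames, 16 patches each
out = SpaceTimeAttention()(x)
print(out.shape)                    # torch.Size([2, 8, 16, 64])
```

Factorizing attention this way keeps cost linear in frames × patches rather than quadratic in their product, which is what makes pure-attention video models tractable.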
☆134, updated Apr 1, 2021
Alternatives and similar repositories for STAM-pytorch
Users interested in STAM-pytorch are comparing it to the libraries listed below.
- Official implementation of "An Image is Worth 16x16 Words, What is a Video Worth?", 2021 (☆221, updated Aug 23, 2022)
- Implementation of ViViT: A Video Vision Transformer (☆558, updated Jun 21, 2021)
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification (☆729, updated Aug 25, 2021)
- [NeurIPS 2021] Space-time Mixing Attention for Video Transformer (☆18, updated Mar 18, 2022)
- Implementation of Kronecker Attention in Pytorch (☆20, updated Sep 12, 2020)
- Implementation of the 😇 Attention layer from the paper Scaling Local Self-Attention for Parameter Efficient Visual Backbones (☆199, updated Mar 24, 2021)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (☆49, updated Jan 27, 2022)
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in Pytorch (☆309, updated Dec 27, 2021)
- PyTorch implementation of a collection of scalable Video Transformer benchmarks (☆308, updated May 4, 2022)
- A better PyTorch implementation of image local attention that reduces GPU memory by an order of magnitude (☆141, updated Dec 21, 2021)
- An attempt at an implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields and top-down-bottom-up processing (☆196, updated Mar 27, 2021)
- The official PyTorch implementation of the paper "Is Space-Time Attention All You Need for Video Understanding?" (☆1,849, updated Apr 9, 2024)
- Implementation of the E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network (☆226, updated Jun 2, 2024)
- Another attempt at a long-context / efficient transformer by me (☆38, updated Apr 11, 2022)
- Implementation of Uformer, an attention-based U-Net, in Pytorch (☆96, updated Oct 26, 2021)
- PyTorch reimplementation of the Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules (☆58, updated Dec 2, 2020)
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (☆49, updated Apr 6, 2022)
- Implementation of the Lie Transformer, equivariant self-attention, in Pytorch (☆97, updated Feb 19, 2021)
- 🔍 Codebase for the ICML '20 paper "Ready Policy One: World Building Through Active Learning" (arXiv: 2002.02693) (☆18, updated Jul 6, 2023)
- Local Attention, a Flax module for Jax (☆22, updated May 26, 2021)
- A GPT made only of MLPs, in Jax (☆59, updated Jun 23, 2021)
- Implementation of the Triangle Multiplicative module, used in Alphafold2 as an efficient way to mix rows or columns of a 2d feature map, … (☆39, updated Aug 3, 2021)
- Implementation of Insertion-Deletion Denoising Diffusion Probabilistic Models (☆30, updated May 31, 2022)
- An official implementation of video classification for the CVPR 2020 paper "Non-Local Neural Networks With Grouped Bilinear Atten…" (☆12, updated Jan 30, 2021)
- Implementation of the Geometric Vector Perceptron, a simple circuit for 3d rotation equivariance for learning over large biomolecules, in Pytorch (☆77, updated Jun 8, 2021)
- Implementation of Segformer, an Attention + MLP neural network for segmentation, in Pytorch (☆373, updated Dec 9, 2022)
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols (☆16, updated Aug 3, 2021)
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch (☆46, updated Mar 3, 2021)
- Implementation of Multistream Transformers in Pytorch (☆54, updated Jul 31, 2021)
- Implementation of TransGanFormer, an all-attention GAN that combines the findings from the recent GanFormer and TransGan papers (☆155, updated Apr 27, 2021)
- GPT, but made only out of MLPs (☆89, updated May 25, 2021)
- Implementation of the Cross Transformer for spatially-aware few-shot transfer, in Pytorch (☆54, updated Mar 30, 2021)
- Code for Temporal Data Augmentations (ECCVW 2020) (☆37, updated Aug 18, 2020)
- Implementation of Nyström self-attention, from the paper Nyströmformer (☆145, updated Mar 24, 2025)
- Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorch module (☆171, updated Nov 25, 2022)
- Implementation of the Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch (☆120, updated Aug 4, 2021)
- Omniscient Video Super-Resolution (☆49, updated Oct 16, 2021)
- Summaries from the computer vision research community cvpaper.challenge: survey materials, research outputs, and more (☆12, updated Jan 20, 2021)
- Graph neural network message passing reframed as a Transformer with local attention (☆70, updated Dec 24, 2022)