facebookresearch / TimeSformer
The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?"
☆1,823 · Updated last year
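The paper's core idea, divided space-time attention, replaces joint attention over all T·S patch tokens (cost growing as (T·S)²) with a temporal pass followed by a spatial pass (cost growing as T·S·(T+S)). A minimal back-of-envelope sketch of the query-key pair counts; the frame/patch numbers below are illustrative, not taken from the paper:

```python
def joint_pairs(T, S):
    """Query-key pairs for full joint space-time attention over T*S tokens."""
    n = T * S
    return n * n

def divided_pairs(T, S):
    """Query-key pairs for divided attention: each token attends to the
    T tokens at its spatial location, then the S tokens in its frame."""
    n = T * S
    return n * T + n * S

# e.g. 8 frames, 14x14 = 196 patches per frame (a ViT-B/16 grid at 224px)
T, S = 8, 196
print(joint_pairs(T, S))    # 2458624
print(divided_pairs(T, S))  # 319872
```

The factorization trades a quadratic blow-up in T·S for two much smaller passes, which is what makes longer clips tractable.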
Alternatives and similar repositories for TimeSformer
Users interested in TimeSformer are comparing it to the repositories listed below.
- This is an official implementation for "Video Swin Transformer". ☆1,629 · Updated 2 years ago
- ☆932 · Updated last year
- Implementation of ViViT: A Video Vision Transformer ☆556 · Updated 4 years ago
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification ☆727 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,674 · Updated 2 years ago
- [ICLR 2022] Official implementation of UniFormer ☆896 · Updated last year
- PyTorch implementation of a collection of scalable Video Transformer Benchmarks. ☆305 · Updated 3 years ago
- Scenic: A Jax Library for Computer Vision Research and Beyond ☆3,762 · Updated this week
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆748 · Updated last year
- Code Release for MViTv2 on Image Recognition. ☆450 · Updated last year
- [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding ☆2,180 · Updated last year
- Video Swin Transformer - PyTorch ☆265 · Updated 4 years ago
- Code release for ActionFormer (ECCV 2022) ☆537 · Updated last year
- ☆1,035 · Updated 5 years ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,315 · Updated 4 years ago
- Code release for ConvNeXt V2 model ☆1,959 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,365 · Updated last year
- Recent Transformer-based CV and related works. ☆1,339 · Updated 2 years ago
- PyTorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) ☆2,120 · Updated 3 years ago
- This is a collection of our NAS and Vision Transformer work. ☆1,826 · Updated last year
- Explainability for Vision Transformers ☆1,063 · Updated 3 years ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆1,022 · Updated 3 years ago
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆602 · Updated 2 years ago
- Official implementation of PVT series ☆1,882 · Updated 3 years ago
- [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet ☆1,191 · Updated 2 years ago
- Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and T… ☆644 · Updated last week
- VideoX: a collection of video cross-modal models ☆1,061 · Updated last year
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize … ☆1,977 · Updated 2 years ago
- OpenMMLab optical flow toolbox and benchmark ☆1,046 · Updated last year
- Official DeiT repository ☆4,323 · Updated last year