Malitha123 / awesome-video-self-supervised-learning
A curated list of awesome self-supervised learning methods in videos
☆134 · Updated last week
Alternatives and similar repositories for awesome-video-self-supervised-learning:
Users who are interested in awesome-video-self-supervised-learning are comparing it to the repositories listed below.
- [CVPR'23] AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders ☆78 · Updated last year
- Foundation Models for Video Understanding: A Survey ☆119 · Updated 7 months ago
- Open source implementation of "Vision Transformers Need Registers" ☆175 · Updated 2 weeks ago
- This repo contains the official implementation of ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … ☆87 · Updated 11 months ago
- [CVPR2023] Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning (https://arxiv… ☆125 · Updated last year
- A suite for video modeling with Mamba ☆263 · Updated 11 months ago
- ☆78 · Updated last year
- The official repository for ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition" ☆78 · Updated 3 months ago
- [CVPR 2023] Official repository of paper titled "Fine-tuned CLIP models are efficient video learners". ☆273 · Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆206 · Updated 4 months ago
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆108 · Updated last year
- Official code for "Top-Down Visual Attention from Analysis by Synthesis" (CVPR 2023 highlight) ☆165 · Updated last year
- [CVPR 2024] Official code for the paper "Temporally Consistent Unbalanced Optimal Transport for Unsupervised Action Segmentation" ☆36 · Updated 8 months ago
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆324 · Updated 6 months ago
- Official open-source code for "Masked Autoencoders As Spatiotemporal Learners" ☆336 · Updated 4 months ago
- Official repository for "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition" [ICCV 2023] ☆97 · Updated 11 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆97 · Updated 9 months ago
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆116 · Updated last year
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024 ☆54 · Updated 7 months ago
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆107 · Updated 2 weeks ago
- Official PyTorch repository for GRAM ☆57 · Updated last week
- The official implementation of CMAE (https://arxiv.org/abs/2207.13532 and https://ieeexplore.ieee.org/document/10330745) ☆99 · Updated last year
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) ☆117 · Updated 2 weeks ago
- [ECCV 2024] PyTorch implementation of CropMAE, introduced in "Efficient Image Pre-Training with Siamese Cropped Masked Autoencoders" ☆57 · Updated last month
- Awesome papers & datasets specifically focused on long-term videos. ☆267 · Updated 5 months ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆252 · Updated 8 months ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated 11 months ago
- ☆39 · Updated 10 months ago
- Official repository for "Self-Supervised Video Transformer" (CVPR'22) ☆106 · Updated 9 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for the EgoVis Challenges in CVPR 2024 ☆127 · Updated last month