☆182 · Aug 20, 2022 · Updated 3 years ago
Alternatives and similar repositories for efficient-video-recognition
Users interested in efficient-video-recognition are comparing it to the libraries listed below.
- ☆193 · Oct 22, 2022 · Updated 3 years ago
- ICCV2023: Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Sep 25, 2023 · Updated 2 years ago
- ☆85 · May 8, 2023 · Updated 2 years ago
- ☆49 · Nov 12, 2022 · Updated 3 years ago
- [CVPR'2023] Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models ☆154 · Sep 9, 2024 · Updated last year
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 · Jun 20, 2023 · Updated 2 years ago
- Official PyTorch implementation of the paper "Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring" ☆107 · Jan 28, 2024 · Updated 2 years ago
- [AAAI'2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective ☆198 · May 30, 2024 · Updated last year
- ☆18 · Aug 23, 2022 · Updated 3 years ago
- VideoX: a collection of video cross-modal models ☆1,060 · Jun 3, 2024 · Updated last year
- Official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆602 · Dec 6, 2023 · Updated 2 years ago
- ☆42 · Apr 7, 2024 · Updated last year
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆303 · Apr 3, 2024 · Updated last year
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023) ☆30 · Sep 5, 2023 · Updated 2 years ago
- [ICCV2023 Oral] Implicit Temporal Modeling with Learnable Alignment for Video Recognition ☆41 · Nov 29, 2023 · Updated 2 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆117 · Sep 15, 2022 · Updated 3 years ago
- [ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆346 · May 27, 2024 · Updated last year
- [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆341 · Apr 2, 2024 · Updated last year
- [ECCV 2022] Official PyTorch implementation of the paper "Zero-Shot Temporal Action Detection via Vision-Language Prompting" ☆114 · Aug 3, 2023 · Updated 2 years ago
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆301 · Sep 17, 2023 · Updated 2 years ago
- [CVPR2022] Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos ☆102 · Oct 30, 2022 · Updated 3 years ago
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆126 · Jul 1, 2023 · Updated 2 years ago
- [ECCV 2024] ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video ☆20 · Jul 29, 2024 · Updated last year
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,695 · Dec 8, 2023 · Updated 2 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆136 · May 5, 2023 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022) ☆210 · Dec 18, 2022 · Updated 3 years ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆138 · Aug 23, 2025 · Updated 6 months ago
- [AAAI'25] Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP ☆19 · Aug 5, 2025 · Updated 7 months ago
- Official implementation of Elaborative Rehearsal for Zero-shot Action Recognition (ICCV2021) ☆36 · Apr 9, 2022 · Updated 3 years ago
- ☆119 · Feb 19, 2024 · Updated 2 years ago
- Code release for "Learning Video Representations from Large Language Models" ☆534 · Oct 1, 2023 · Updated 2 years ago
- [ICLR2022] Official implementation of UniFormer ☆898 · Mar 29, 2024 · Updated last year
- ☆109 · Dec 23, 2022 · Updated 3 years ago
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆763 · Oct 8, 2024 · Updated last year
- ☆59 · Jun 17, 2022 · Updated 3 years ago
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆545 · Sep 15, 2023 · Updated 2 years ago
- ☆26 · Aug 31, 2023 · Updated 2 years ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,219 · Dec 15, 2025 · Updated 3 months ago