muzairkhattak / ViFi-CLIP
[CVPR 2023] Official repository of the paper "Fine-tuned CLIP Models are Efficient Video Learners".
☆275 · Updated last year
Alternatives and similar repositories for ViFi-CLIP:
Users interested in ViFi-CLIP are comparing it to the repositories listed below.
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆328 · Updated 11 months ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆157 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- ☆175 · Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆278 · Updated last year
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆272 · Updated 10 months ago
- Foundation Models for Video Understanding: A Survey ☆120 · Updated 8 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆187 · Updated last year
- Official PyTorch repository for CG-DETR "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Gr…" ☆131 · Updated 8 months ago
- Official repository for "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition" [ICCV 2023] ☆97 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆116 · Updated last year
- [CVPR 2023] Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning (https://arxiv…) ☆126 · Updated last year
- ☆193 · Updated 2 years ago
- A curated list of awesome self-supervised learning methods in videos ☆138 · Updated this week
- Awesome papers & datasets specifically focused on long-term videos ☆270 · Updated 5 months ago
- A suite for modeling video with Mamba ☆263 · Updated 11 months ago
- [CVPR 2024] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆305 · Updated 9 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆82 · Updated 10 months ago
- [NeurIPS 2021] Moment-DETR code and QVHighlights dataset ☆304 · Updated last year
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆289 · Updated 4 months ago
- Official PyTorch repository for "QD-DETR: Query-Dependent Video Representation for Moment Retrieval and Highlight Detection" (CVPR 2023 …) ☆227 · Updated last year
- ☆113 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆166 · Updated last year
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆282 · Updated 2 years ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆198 · Updated 4 months ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆134 · Updated 2 years ago
- [NeurIPS 2023] Code and Model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆278 · Updated last year
- [CVPR 2024] The official implementation of AdaTAD: End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames ☆36 · Updated 10 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆364 · Updated 5 months ago