muzairkhattak / ViFi-CLIP
[CVPR 2023] Official repository of paper titled "Fine-tuned CLIP models are efficient video learners".
☆276 · Updated last year
Alternatives and similar repositories for ViFi-CLIP
Users interested in ViFi-CLIP are comparing it to the repositories listed below.
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆328 · Updated last year
- An official implementation of "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆158 · Updated last year
- ☆176 · Updated 2 years ago
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆118 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- ☆192 · Updated 2 years ago
- Official repository for "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition" [ICCV 2023] ☆100 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆280 · Updated last year
- [CVPR 2023] Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning (https://arxiv… ☆126 · Updated 2 years ago
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" ☆276 · Updated 11 months ago
- Foundation Models for Video Understanding: A Survey ☆123 · Updated 9 months ago
- ☆115 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆185 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆227 · Updated last year
- ☆79 · Updated 2 years ago
- [ICLR 2023] DeCap: Decoding CLIP Latents for Zero-shot Captioning ☆133 · Updated 2 years ago
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆65 · Updated 11 months ago
- A curated list of awesome self-supervised learning methods in videos ☆140 · Updated last month
- [TPAMI 2024] Code and models for "VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset" ☆293 · Updated 5 months ago
- An unofficial implementation of TubeViT from "Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning" ☆89 · Updated 8 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆321 · Updated 10 months ago
- Official PyTorch repository for CG-DETR "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Gr… ☆132 · Updated 9 months ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆180 · Updated last year
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆102 · Updated 5 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆83 · Updated 11 months ago
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆216 · Updated 2 years ago
- [CVPR 2024] The official implementation of AdaTAD: End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames ☆37 · Updated 10 months ago
- [CVPR 2023] All in One: Exploring Unified Video-Language Pre-training ☆282 · Updated 2 years ago
- Official implementation of SnAG (CVPR 2024) ☆47 · Updated last month
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆101 · Updated 4 months ago