wengzejia1 / Open-VCLIP
☆120 · Updated Feb 19, 2024
Alternatives and similar repositories for Open-VCLIP
Users interested in Open-VCLIP are comparing it to the libraries listed below.
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆97 · Updated Jan 14, 2025
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated May 25, 2023
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023) ☆30 · Updated Sep 5, 2023
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆305 · Updated Apr 3, 2024
- Official implementation for the ACM MM 2024 paper "VrdONE: One-stage Video Visual Relation Detection" ☆11 · Updated Nov 13, 2024
- [ECCV 2024] ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video ☆20 · Updated Jul 29, 2024
- Official PyTorch implementation of the paper "Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring" ☆107 · Updated Jan 28, 2024
- Accepted at ICCV '23 ☆15 · Updated Oct 4, 2023
- Large-Vocabulary Video Instance Segmentation dataset ☆96 · Updated Jul 5, 2024
- [ICCV 2023 Oral] Implicit Temporal Modeling with Learnable Alignment for Video Recognition ☆41 · Updated Nov 29, 2023
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization" (CVPR 2024) ☆69 · Updated Sep 11, 2024
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Updated Sep 25, 2023
- [AAAI'25] Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP ☆19 · Updated Aug 5, 2025
- ☆42 · Updated Apr 7, 2024
- [AAAI 2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective ☆199 · Updated May 30, 2024
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆74 · Updated Jan 20, 2025
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated Dec 4, 2024
- Official implementation of Elaborative Rehearsal for Zero-shot Action Recognition (ICCV 2021) ☆36 · Updated Apr 9, 2022
- Composed Video Retrieval ☆62 · Updated May 2, 2024
- [CVPRW'23] First Place Solution to the CVPR 2023 AQTC Challenge ☆15 · Updated Jul 18, 2023
- Code of the Grounded MUIE model, REAMO ☆11 · Updated Dec 3, 2024
- Code for our IJCV 2023 paper "CLIP-guided Prototype Modulating for Few-shot Action Recognition" ☆77 · Updated Mar 7, 2024
- [CVPR 2023] Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models ☆155 · Updated Sep 9, 2024
- ☆18 · Updated Feb 20, 2025
- [CVPR'24] OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition ☆38 · Updated Apr 27, 2024
- Code release for "Language-conditioned Detection Transformer" ☆88 · Updated Jun 17, 2024
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆300 · Updated Sep 17, 2023
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆73 · Updated Feb 3, 2025
- [ICCV 2023 CLVL Workshop] Zero-Shot and Few-Shot Video Question Answering with Multi-Modal Prompts ☆14 · Updated Jan 13, 2025
- Video + CLIP Baseline for Ego4D Long Term Action Anticipation Challenge (CVPR 2022) ☆15 · Updated Jul 4, 2022
- ☆80 · Updated Nov 24, 2024
- ☆21 · Updated May 11, 2025
- [ICCV 2023] What Can Simple Arithmetic Operations Do for Temporal Modeling? ☆73 · Updated Jan 26, 2024
- ☆62 · Updated Jun 16, 2023
- BEAR: a new BEnchmark on video Action Recognition ☆46 · Updated Apr 21, 2024
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆65 · Updated Jun 28, 2024
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated Jan 8, 2025
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated Sep 13, 2024
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆33 · Updated Mar 15, 2024