mlvlab / OVQA
Open-Vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models (ICCV 2023)
☆18 · Updated last year
Alternatives and similar repositories for OVQA
Users interested in OVQA are comparing it to the repositories listed below.
- Official implementation (PyTorch) of "VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Capti…" ☆18 · Updated 3 months ago
- Official implementation of the CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer" ☆47 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆74 · Updated last month
- Official PyTorch code of GroundVQA (CVPR'24) ☆60 · Updated 8 months ago
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated last month
- ☆10 · Updated last month
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆38 · Updated last year
- Official implementation of the CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection" ☆37 · Updated 8 months ago
- Official implementation of the CVPR 2024 paper "Prompt Learning via Meta-Regularization" ☆27 · Updated 2 months ago
- Video-Text Representation Learning via Differentiable Weak Temporal Alignment (CVPR 2022) ☆16 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 5 months ago
- [WACV 2025] Official PyTorch code for "Background-aware Moment Detection for Video Moment Retrieval" ☆13 · Updated 2 months ago
- MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models (CVPR 2023) ☆33 · Updated last year
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆55 · Updated 10 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆49 · Updated last month
- ☆22 · Updated 2 years ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆20 · Updated 4 months ago
- NegCLIP ☆31 · Updated 2 years ago
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-Language Reasoners?" ☆32 · Updated last year
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆32 · Updated last year
- [AAAI 2023] Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (Oral) ☆39 · Updated last year
- Winner solution to the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆29 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated last year
- [EMNLP 2022] Weakly-Supervised Temporal Article Grounding ☆14 · Updated last year
- ☆15 · Updated last year
- ☆23 · Updated 2 years ago
- [EMNLP 2022] Official PyTorch code for "Modal-specific Pseudo Query Generation for Video Corpus Moment Retrieval" ☆10 · Updated 11 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆75 · Updated 6 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 10 months ago