snumprlab / isr-dpo
Official Implementation of ISR-DPO: Aligning Large Multimodal Models for Videos by Iterative Self-Retrospective DPO (AAAI'25)
☆24 · Updated 7 months ago
Alternatives and similar repositories for isr-dpo
Users interested in isr-dpo are comparing it to the repositories listed below
- ☆15 · Updated 10 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 6 months ago
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆19 · Updated last year
- ☆24 · Updated 4 months ago
- [CVPR 2025] LION-FS: Fast & Slow Video-Language Thinker as Online Video Assistant ☆20 · Updated 3 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆30 · Updated last year
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆52 · Updated 3 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated 2 months ago
- Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models [CVPR 2025] ☆73 · Updated 3 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models via Communicative Decoding ☆45 · Updated 4 months ago
- Code for "Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning" [EMNLP 2025 Findings] ☆15 · Updated last month
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated 2 years ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆73 · Updated last year
- Awesome Vision-Language Compositionality, a comprehensive curation of research papers in the literature ☆29 · Updated 7 months ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆54 · Updated 3 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- Repository for the paper "Teaching VLMs to Localize Specific Objects from In-context Examples" ☆29 · Updated 10 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated last month
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models" ☆61 · Updated 6 months ago
- ☆26 · Updated 6 months ago
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated 9 months ago
- ☆13 · Updated 9 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆44 · Updated 6 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆76 · Updated 6 months ago
- Official implementation of the paper "VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interaction Format" ☆34 · Updated 8 months ago
- Official implementation (PyTorch) of "VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning" ☆22 · Updated 8 months ago
- ☆36 · Updated last year
- Latest Papers, Codes and Datasets on VTG-LLMs ☆37 · Updated 2 weeks ago