Yui010206 / CREMA
[ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion
☆55 · Updated 6 months ago
Alternatives and similar repositories for CREMA
Users interested in CREMA are comparing it to the repositories listed below.
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆31 · Updated last year
- ☆58 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆34 · Updated last month
- ☆37 · Updated 4 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆76 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆64 · Updated 5 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆51 · Updated 5 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 9 months ago
- VPEval codebase from "Visual Programming for Text-to-Image Generation and Evaluation" (NeurIPS 2023) ☆45 · Updated 2 years ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆33 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆57 · Updated 2 years ago
- ☆29 · Updated 5 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆98 · Updated 5 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆46 · Updated 7 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 4 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆153 · Updated 3 months ago
- ☆36 · Updated last year
- Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models [CVPR 2025] ☆76 · Updated 6 months ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 10 months ago
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆188 · Updated 11 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆77 · Updated 2 weeks ago
- Quick Long Video Understanding ☆72 · Updated 2 months ago
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆54 · Updated 7 months ago
- (ICCV 2025) Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆44 · Updated 6 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆85 · Updated 5 months ago
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆33 · Updated last year
- ☆33 · Updated 8 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 10 months ago
- ☆40 · Updated 4 months ago
- ☆62 · Updated 4 months ago