HarryHsing / EchoInk
EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Visual Reasoning with Qwen2.5-Omni]
☆60 · Updated 5 months ago
Alternatives and similar repositories for EchoInk
Users interested in EchoInk are comparing it to the repositories listed below.
- This repo is for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities Models ☆60 · Updated last month
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆107 · Updated last month
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆74 · Updated 7 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 7 months ago
- ☆35 · Updated 2 months ago
- ☆21 · Updated 9 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆31 · Updated last month
- [CVPR 2025] Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation ☆73 · Updated 4 months ago
- ☆19 · Updated last year
- [ECCV’24] Official Implementation for CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenario… ☆57 · Updated last year
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆27 · Updated 6 months ago
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆168 · Updated 8 months ago
- ☆34 · Updated 5 months ago
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆51 · Updated 4 months ago
- ☆17 · Updated 3 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- Solving catastrophic forgetting in LMMs (AAAI 2025) ☆44 · Updated 6 months ago
- ☆33 · Updated 3 months ago
- HallE-Control: Controlling Object Hallucination in LMMs ☆31 · Updated last year
- Code for DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models ☆74 · Updated 3 months ago
- ☆24 · Updated 6 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆57 · Updated last year
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆30 · Updated 10 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆25 · Updated last month
- [ACM-MM 2025 Workshop] More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment. ☆24 · Updated last month
- A list of current audio-visual multimodal work with awesome resources (papers, applications, data, reviews, surveys, etc.). ☆27 · Updated 2 years ago
- The code and weights for LoVA, a novel model for Long-form Video-to-Audio generation based on the Diffusion Transformer (DiT) arc… ☆15 · Updated 8 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 5 months ago
- ☆43 · Updated 5 months ago