zhangbaijin / From-Redundancy-to-Relevance
[NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models
☆87 · Updated last month
Alternatives and similar repositories for From-Redundancy-to-Relevance:
Users interested in From-Redundancy-to-Relevance are comparing it to the repositories listed below.
- [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆74 · Updated this week
- A collection of multimodal reasoning papers, codes, datasets, benchmarks and resources. ☆122 · Updated this week
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆107 · Updated this week
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆130 · Updated 3 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆92 · Updated last year
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆170 · Updated 4 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ☆124 · Updated 7 months ago
- FACTUAL benchmark dataset and the pre-trained textual scene graph parser trained on FACTUAL. ☆104 · Updated last month
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ☆113 · Updated 11 months ago
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆43 · Updated 8 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆288 · Updated this week
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ☆79 · Updated this week
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR 2024 ☆86 · Updated 9 months ago
- ☆63 · Updated 2 weeks ago
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description ☆69 · Updated last year
- An official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? ☆112 · Updated 3 weeks ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ☆171 · Updated 8 months ago
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆130 · Updated 8 months ago
- An open-source implementation for training LLaVA-NeXT. ☆385 · Updated 5 months ago
- Official implementation of "Towards Efficient Visual Adaption via Structural Re-parameterization". ☆179 · Updated 11 months ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆263 · Updated last week
- ☆66 · Updated 3 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆156 · Updated 6 months ago
- [AAAI 2025] Code for the paper: Enhancing Multimodal Large Language Models Complex Reasoning via Similarity Computation ☆31 · Updated 2 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆241 · Updated 3 months ago
- [NeurIPS'24] Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation ☆57 · Updated 3 months ago
- WorldGPT: Empowering LLM as Multimodal World Model ☆115 · Updated 7 months ago
- Official code for "A Closer Look at Audio-Visual Segmentation" ☆93 · Updated last month
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022] ☆126 · Updated 5 months ago
- A comprehensive collection of resources focused on addressing and understanding hallucination phenomena in MLLMs. ☆35 · Updated 10 months ago