zhangbaijin / From-Redundancy-to-Relevance
[NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models
☆112 · Updated 6 months ago
Alternatives and similar repositories for From-Redundancy-to-Relevance
Users interested in From-Redundancy-to-Relevance are comparing it to the libraries listed below.
- 🚀 [NeurIPS24] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆85 · Updated 2 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆113 · Updated 5 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆96 · Updated last year
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆178 · Updated 9 months ago
- [ICCV 2025] Boosting MLLM Reasoning with Text-Debiased Hint-GRPO ☆31 · Updated last month
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR 2024 ☆98 · Updated 2 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ☆116 · Updated last year
- [NAACL 2025] SIUO: Cross-Modality Safety Alignment ☆112 · Updated 6 months ago
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆42 · Updated last year
- [ACL 2023 Findings] The FACTUAL dataset and the textual scene graph parser trained on FACTUAL ☆115 · Updated 2 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ☆69 · Updated last month
- (ICCV 2025) Enhance CLIP and MLLM's fine-grained visual representations with generative models ☆70 · Updated 2 months ago
- A collection of multimodal reasoning papers, codes, datasets, benchmarks, and resources ☆288 · Updated last week
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆175 · Updated 3 months ago
- An open-source implementation for training LLaVA-NeXT ☆417 · Updated 10 months ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ☆182 · Updated last year
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆148 · Updated 8 months ago
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ☆127 · Updated 2 weeks ago
- Your efficient and accurate answer verification system for RL training ☆37 · Updated 2 months ago
- ☆68 · Updated 5 months ago
- [CVPR 2023] Official implementation of the paper "Fine-grained Audible Video Description" ☆73 · Updated last year
- ☆69 · Updated 8 months ago
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆134 · Updated 4 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ☆128 · Updated last year
- A Gaussian dense reward framework for GUI grounding training ☆217 · Updated last week
- [NeurIPS'24] Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation ☆62 · Updated 8 months ago
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ☆89 · Updated 5 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆185 · Updated last year
- Official implementation for the paper "SeePhys: Does Seeing Help Thinking? -- Benchmarking Vision-Based Physics Reasoning" ☆37 · Updated 2 weeks ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆264 · Updated 3 months ago