zhangbaijin / From-Redundancy-to-Relevance
[NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models
☆115 · Updated 7 months ago
Alternatives and similar repositories for From-Redundancy-to-Relevance
Users interested in From-Redundancy-to-Relevance are comparing it to the repositories listed below:
- [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆85 · Updated 2 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆113 · Updated 5 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆96 · Updated last year
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ☆117 · Updated last year
- (ICCV 2025) Enhance CLIP and MLLM's fine-grained visual representations with generative models. ☆70 · Updated 2 months ago
- [ACL 2023 Findings] FACTUAL dataset, the textual scene graph parser trained on FACTUAL. ☆115 · Updated 2 months ago
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR 2024 ☆99 · Updated 2 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆176 · Updated this week
- A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources. ☆299 · Updated last week
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆42 · Updated last year
- [NAACL 2025] SIUO: Cross-Modality Safety Alignment ☆113 · Updated 7 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆178 · Updated 10 months ago
- [ICCV 2025] Boosting MLLM Reasoning with Text-Debiased Hint-GRPO ☆33 · Updated 2 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ☆71 · Updated 2 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆151 · Updated 9 months ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ☆183 · Updated last year
- [CVPR 2023] Official implementation of the paper: Fine-grained Audible Video Description ☆73 · Updated last year
- An open-source implementation for training LLaVA-NeXT. ☆419 · Updated 10 months ago
- ☆68 · Updated 6 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ☆130 · Updated last year
- ☆70 · Updated 8 months ago
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ☆150 · Updated last month
- Your efficient and accurate answer verification system for RL training. ☆38 · Updated 2 months ago
- Official implementation for the paper "SeePhys: Does Seeing Help Thinking? -- Benchmarking Vision-Based Physics Reasoning" ☆39 · Updated last month
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆134 · Updated 5 months ago
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ☆90 · Updated 5 months ago
- A Gaussian dense reward framework for GUI grounding training ☆223 · Updated 3 weeks ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆266 · Updated 3 months ago
- ✨✨ Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆291 · Updated 4 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆184 · Updated last year