Hoar012 / RAP-MLLM
[CVPR 2025] RAP: Retrieval-Augmented Personalization
☆76 · Updated last week
Alternatives and similar repositories for RAP-MLLM
Users interested in RAP-MLLM are comparing it to the libraries listed below.
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆74 · Updated 2 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆199 · Updated 4 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆128 · Updated 4 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆139 · Updated 3 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆154 · Updated 8 months ago
- ☆83 · Updated last year
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆163 · Updated last month
- [ICCV 2025] VisRL: Intention-Driven Visual Perception via Reinforced Reasoning ☆40 · Updated 3 weeks ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆55 · Updated 8 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆108 · Updated 6 months ago
- ☆75 · Updated 7 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆38 · Updated last year
- ☆130 · Updated 8 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆65 · Updated last year
- Official implementation of MIA-DPO ☆67 · Updated 10 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆106 · Updated last year
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 8 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆90 · Updated 7 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆96 · Updated 4 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆91 · Updated this week
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 6 months ago
- ☆98 · Updated 3 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆108 · Updated 6 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆58 · Updated 5 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆74 · Updated 6 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆234 · Updated 3 months ago
- A collection of recent papers on reasoning in video generation models ☆66 · Updated this week
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆80 · Updated last year
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ☆74 · Updated 4 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year