CnFaker / LLaVA-SP
[ICCV 2025] The official PyTorch implementation of "LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs".
☆15 · Updated last month
Alternatives and similar repositories for LLaVA-SP
Users interested in LLaVA-SP are comparing it to the repositories listed below.
- [ICLR 2025] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆240 · Updated 3 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆149 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper 'ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models' ☆186 · Updated 3 weeks ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆21 · Updated last month
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆87 · Updated 3 months ago
- Awesome MLLMs/Benchmarks for Short/Long/Streaming Video Understanding ☆28 · Updated 6 months ago
- Collections of Papers and Projects for Multimodal Reasoning ☆105 · Updated 3 months ago
- A Fine-grained Benchmark for Video Captioning and Retrieval ☆19 · Updated 3 weeks ago
- [ICLR 2025] Text4Seg: Reimagining Image Segmentation as Text Generation ☆110 · Updated 2 weeks ago
- [CVPR 2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi… ☆27 · Updated last month
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆51 · Updated 2 weeks ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆30 · Updated 3 months ago
- [CVPR 2025 Highlight] Reason-before-Retrieve: One-Stage Reflective Chain-of-Thoughts for Training-Free Zero-Shot Composed Image Retrieval ☆54 · Updated last month
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆64 · Updated last week
- [CVPR 2025] Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Att… ☆26 · Updated 5 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆89 · Updated 2 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆70 · Updated 3 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆37 · Updated 5 months ago
- R1-like Video-LLM for Temporal Grounding ☆109 · Updated last month
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆110 · Updated 4 months ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… ☆109 · Updated this week
- ☆132 · Updated 5 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆45 · Updated 4 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆40 · Updated 4 months ago
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆66 · Updated 3 weeks ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ☆47 · Updated 2 months ago
- [NeurIPS 2023] The official implementation of SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation ☆32 · Updated last year
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆157 · Updated 5 months ago
- ☆93 · Updated 4 months ago
- [CVPR 2024] GSVA: Generalized Segmentation via Multimodal Large Language Models ☆139 · Updated 10 months ago