JackYFL / awesome-VLLMs
This repository collects papers on VLLM applications. New papers are added irregularly.
☆195 · Updated 3 weeks ago
Alternatives and similar repositories for awesome-VLLMs
Users interested in awesome-VLLMs are comparing it to the libraries listed below.
- A collection and survey of frontier vision-language model papers and their GitHub repositories. Continuously updated. ☆484 · Updated 3 weeks ago
- Vision Manus: Your versatile Visual AI assistant ☆305 · Updated 2 months ago
- [NeurIPS 2025] Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆280 · Updated 5 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆415 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆379 · Updated 8 months ago
- 📚 Collection of token-level model compression resources. ☆189 · Updated 4 months ago
- Official repository for VisionZip (CVPR 2025) ☆396 · Updated 5 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆232 · Updated 2 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆218 · Updated 9 months ago
- Project Page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆582 · Updated 5 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆210 · Updated 2 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆225 · Updated 9 months ago
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆380 · Updated 10 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆790 · Updated 3 weeks ago
- The Next Step Forward in Multimodal LLM Alignment ☆193 · Updated 8 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆536 · Updated last year
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆927 · Updated last month
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆411 · Updated 8 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆311 · Updated 8 months ago
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆383 · Updated 3 months ago
- [NeurIPS 2025] The official repository for our paper, "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reason… ☆152 · Updated 3 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆257 · Updated 2 months ago
- The official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆281 · Updated last year
- 📖 A repository for organizing papers, code, and other resources related to Visual Reinforcement Learning. ☆375 · Updated this week
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆252 · Updated 2 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆247 · Updated 4 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆361 · Updated 3 weeks ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆335 · Updated last year
- The first paper to explore how to effectively use R1-like RL for MLLMs; it introduces Vision-R1, a reasoning MLLM that leverages … ☆747 · Updated 3 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆338 · Updated 3 months ago