JackYFL / awesome-VLLMs
This repository collects papers on visual large language model (VLLM) applications. New papers are added irregularly.
☆145 · Updated last month
Alternatives and similar repositories for awesome-VLLMs
Users interested in awesome-VLLMs are comparing it to the repositories listed below:
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆180 · Updated 3 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆342 · Updated 6 months ago
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆219 · Updated 2 weeks ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆211 · Updated last week
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆115 · Updated 4 months ago
- Official repository for VisionZip (CVPR 2025) ☆319 · Updated last month
- Collections of Papers and Projects for Multimodal Reasoning ☆105 · Updated 2 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆169 · Updated 2 months ago
- The official implementation of "VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning" ☆222 · Updated last month
- A frontier collection and survey of vision-language model papers and their GitHub repositories ☆259 · Updated last week
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆452 · Updated last month
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs ☆65 · Updated 6 months ago
- ✨ First open-source R1-like Video-LLM [2025/02/18] ☆350 · Updated 4 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆80 · Updated 2 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆83 · Updated last month
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆224 · Updated 2 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" ☆128 · Updated last month
- 📚 Collection of token-level model compression resources ☆140 · Updated last week
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆284 · Updated 6 months ago
- This is the official implementation of our paper "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension" ☆208 · Updated this week
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆82 · Updated 2 weeks ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, … ☆311 · Updated 4 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆147 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆184 · Updated this week
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆162 · Updated last month
- Pruning the VLLMs ☆97 · Updated 7 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆119 · Updated 2 weeks ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆128 · Updated 4 months ago
- Pixel-Level Reasoning Model trained with RL ☆158 · Updated 2 weeks ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆325 · Updated 11 months ago