zhengxuJosh / Awesome-RAG-Vision
Awesome-RAG-Vision: a curated list of advanced retrieval-augmented generation (RAG) for Computer Vision
☆192 · Updated last week
Alternatives and similar repositories for Awesome-RAG-Vision
Users interested in Awesome-RAG-Vision are comparing it to the repositories listed below.
- The development and future prospects of multimodal reasoning models. ☆431 · Updated last week
- Awesome Reasoning in MLLMs: Papers and Projects about learning to reason with MLLMs, including Chain-of-Thought (CoT), OpenAI o1, and Dee… ☆54 · Updated 3 months ago
- Repo for "VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforce… ☆274 · Updated last week
- Collection of papers and repos for multimodal chain-of-thought ☆84 · Updated 8 months ago
- Collect every awesome work about r1! ☆394 · Updated 2 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆228 · Updated 2 weeks ago
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆541 · Updated 3 months ago
- Customize your arXiv recommendation every day. ☆109 · Updated 3 months ago
- [ICLR 2025] The First Multimodal Search Engine Pipeline and Benchmark for LMMs ☆446 · Updated 5 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆244 · Updated 4 months ago
- FlexRAG: A RAG Framework for Information Retrieval and Generation. ☆192 · Updated 3 weeks ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆68 · Updated last month
- A Survey on Multimodal Retrieval-Augmented Generation ☆254 · Updated last week
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 8 months ago
- MDocAgent: A Multi-Modal Multi-Agent Framework for Document Understanding ☆189 · Updated 3 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆49 · Updated 2 months ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆349 · Updated 2 months ago
- ☆173 · Updated 5 months ago
- *Multimodal Large Models: A New-Generation AI Technology Paradigm* (《多模态大模型：新一代人工智能技术范式》), by Yang Liu and Liang Lin ☆220 · Updated 7 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆241 · Updated last week
- This repository collects papers on VLLM applications; new papers are added irregularly. ☆145 · Updated last month
- ☆53 · Updated 4 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆308 · Updated 4 months ago
- ☆82 · Updated last week
- This series aims to let readers understand and implement every component of a large language model from scratch, along with the training and fine-tuning code, using only basic PyTorch and no other off-the-shelf external libraries; only Python, PyTorch, and the most basic deep learning background are required. ☆350 · Updated 4 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 3 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆169 · Updated 2 months ago
- Awesome LLM pre-training resources, including data, frameworks, and methods. ☆193 · Updated 2 months ago
- ☆596 · Updated last week
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆211 · Updated last week