NeverMoreLCH / SearchLVLMs
Repository for the NeurIPS 2024 paper "SearchLVLMs: A Plug-and-Play Framework for Augmenting Large Vision-Language Models by Searching Up-to-Date Internet Knowledge"
☆26 · Updated last year
Alternatives and similar repositories for SearchLVLMs
Users interested in SearchLVLMs are comparing it to the repositories listed below
- ☆90 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆130 · Updated last year
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆91 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆90 · Updated 8 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆63 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆86 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆136 · Updated 6 months ago
- Official repository of MMDU dataset ☆103 · Updated last year
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆104 · Updated 8 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆392 · Updated 5 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆123 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆191 · Updated 10 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆103 · Updated 2 months ago
- ☆75 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 8 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆215 · Updated 4 months ago
- ☆37 · Updated last year
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆171 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆116 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆62 · Updated 2 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆147 · Updated 7 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆94 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe ☆141 · Updated last month
- ☆111 · Updated last year
- [TMLR 25] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆149 · Updated 4 months ago
- ☆66 · Updated 2 years ago
- Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos ☆65 · Updated 5 months ago