hbhalpha / MDR
☆15 · Updated 3 weeks ago
Alternatives and similar repositories for MDR
Users interested in MDR are comparing it to the libraries listed below.
- ☆16 · Updated 2 months ago
- MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval ☆183 · Updated 2 weeks ago
- ☆59 · Updated 2 months ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆332 · Updated last month
- Research Code for Multimodal-Cognition Team in Ant Group ☆147 · Updated 2 weeks ago
- ☆269 · Updated last week
- A Survey on Multimodal Retrieval-Augmented Generation ☆211 · Updated this week
- Notes on multimodality-related knowledge for large language model (LLM) algorithm and application engineers ☆198 · Updated last year
- A journey toward a real multimodal R1! Large-scale experiments are in progress ☆306 · Updated 3 weeks ago
- Training a LLaVA model with better Chinese support, with the training code and data open-sourced ☆60 · Updated 9 months ago
- [arXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆117 · Updated 7 months ago
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆345 · Updated 3 months ago
- Applications of large language models and multimodal models, mainly covering small models, Agents, cross-modal search, OCR, RAG, ChatBots, and more ☆176 · Updated this week
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆588 · Updated last month
- [EMNLP 2024 Findings] The official PyTorch implementation of EchoSight: Advancing Visual-Language Models with Wiki Knowledge ☆61 · Updated 2 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆47 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆161 · Updated last month
- Repo for "VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforce…" ☆156 · Updated this week
- ☆362 · Updated 3 months ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆436 · Updated 3 weeks ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆643 · Updated last week
- Building a VLM model starting from its basic modules ☆16 · Updated last year
- The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆74 · Updated 2 weeks ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆237 · Updated this week
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆207 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, … ☆297 · Updated 3 months ago
- A multimodal large model implemented from scratch and named Reyes (睿视; R for 睿 "astute", eyes for 眼 "eyes"). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct as its language model, connecting the two through a two-layer MLP projection… ☆13 · Updated 3 months ago
- R1-onevision, a visual language model capable of deep CoT reasoning ☆525 · Updated last month
- New generation of CLIP with fine-grained discrimination capability (ICML 2025) ☆180 · Updated 2 weeks ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ☆143 · Updated 2 months ago