RhapsodyAILab / MiniCPM-V-Embedding
☆29 · Updated 9 months ago
Alternatives and similar repositories for MiniCPM-V-Embedding
Users interested in MiniCPM-V-Embedding are comparing it to the libraries listed below.
- Our 2nd-gen LMM ☆33 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 7 months ago
- A Simple MLLM Surpassed QwenVL-Max with OpenSource Data Only in 14B LLM. ☆37 · Updated 8 months ago
- ☆56 · Updated last year
- A Token-level Text Image Foundation Model for Document Understanding ☆92 · Updated last month
- ☆73 · Updated last year
- ACL 2025: Synthetic data generation pipelines for text-rich images. ☆72 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated 11 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifying image understanding and generation. ☆37 · Updated 11 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 8 months ago
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆59 · Updated 2 weeks ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆17 · Updated last year
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆78 · Updated 6 months ago
- A multimodal large model implemented from scratch and named Reyes (睿视), R for 睿 ("insight"), eyes for 眼 ("eyes"). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct on the language-model side, and also connects the two via a two-layer MLP projection layer… (see the sketch after this list) ☆13 · Updated 3 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆88 · Updated this week
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆84 · Updated 7 months ago
- ☆18 · Updated 4 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- Video dataset dedicated to portrait-mode video recognition. ☆49 · Updated 5 months ago
- ☆11 · Updated 9 months ago
- A Survey of Multimodal Retrieval-Augmented Generation ☆18 · Updated last month
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆61 · Updated last month
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆22 · Updated last year
- ☆35 · Updated 8 months ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆62 · Updated 2 weeks ago
- An LMM that addresses catastrophic forgetting, AAAI 2025 ☆43 · Updated last month
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆156 · Updated 2 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆34 · Updated 11 months ago
- Precision Search through Multi-Style Inputs ☆69 · Updated last month
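
The Reyes entry above describes a common MLLM connector pattern: vision-encoder features are projected into the language model's embedding space by a two-layer MLP. Below is a minimal PyTorch sketch of such a projector, assuming a 1024-dimensional output from InternViT-300M-448px-V2_5 and a 3584-dimensional hidden size for Qwen2.5-7B-Instruct; these dimensions, and all class and variable names, are illustrative assumptions rather than code taken from the Reyes repository.

```python
import torch
import torch.nn as nn

class TwoLayerProjector(nn.Module):
    """Projects vision-encoder patch features into the LLM's embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 3584):
        super().__init__()
        # Two linear layers with a GELU in between, as in LLaVA-style connectors.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim) from the vision encoder
        return self.proj(vision_features)

# Dummy usage: project placeholder patch features into the LLM embedding space.
features = torch.randn(2, 256, 1024)    # assumed vision-encoder patch features
projector = TwoLayerProjector()
visual_tokens = projector(features)     # (2, 256, 3584), ready to concatenate with text embeddings
print(visual_tokens.shape)
```

The projected visual tokens would then be concatenated with the text token embeddings before being fed to the language model; the exact interleaving and training recipe vary by project.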