rednote-hilab / dots.vlm1
The official repository of the dots.vlm1 instruct models proposed by rednote-hilab.
☆136 · Updated this week
Alternatives and similar repositories for dots.vlm1
Users interested in dots.vlm1 are comparing it to the repositories listed below.
- ☆173 · Updated 6 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆73 · Updated last month
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆72 · Updated 2 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆244 · Updated 2 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆230 · Updated last year
- ☆87 · Updated last year
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆245 · Updated 5 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding ☆111 · Updated last week
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆62 · Updated 9 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- Official code implementation of Slow Perception: Let's Perceive Geometric Figures Step-by-step ☆131 · Updated last week
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆164 · Updated last year
- mllm-npu: training multimodal large language models on Ascend NPUs ☆91 · Updated 11 months ago
- ☆86 · Updated last year
- ☆119 · Updated last year
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆102 · Updated last month
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆125 · Updated 9 months ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆30 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆170 · Updated 3 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆385 · Updated 3 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆140 · Updated 2 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆308 · Updated 2 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 10 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆116 · Updated 8 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆83 · Updated last month
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆73 · Updated 9 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 4 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G ☆227 · Updated 2 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆267 · Updated last year