Token-family / TokenFD
[ICCV 2025] A Token-level Text Image Foundation Model for Document Understanding
☆121 · Updated last month
Alternatives and similar repositories for TokenFD
Users that are interested in TokenFD are comparing it to the libraries listed below
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆63 · Updated 11 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆260 · Updated 3 weeks ago
- The official code for the NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated 11 months ago
- ☆29 · Updated last year
- Vary-tiny codebase built on LAVIS (for training from scratch) and a PDF image-text pair dataset (about 600k pairs, English/Chinese) ☆86 · Updated last year
- [arXiv: 2505.17163] OCR-Reasoning Benchmark: Unveiling the True Capabilities of MLLMs in Complex Text-Rich Image Reasoning ☆64 · Updated 2 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆274 · Updated 2 months ago
- Margin-based Vision Transformer ☆47 · Updated 3 weeks ago
- ☆42 · Updated 8 months ago
- Evaluation of the Optical Character Recognition (OCR) capabilities of GPT-4V(ision) ☆125 · Updated last year
- ☆57 · Updated last year
- A Survey of Multimodal Retrieval-Augmented Generation ☆19 · Updated 6 months ago
- ☆98 · Updated 10 months ago
- Research Code for Multimodal-Cognition Team in Ant Group ☆168 · Updated last week
- ☆183 · Updated last year
- Official code implementation of Slow Perception: Let's Perceive Geometric Figures Step-by-step ☆132 · Updated 2 months ago
- ☆74 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆396 · Updated 5 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆35 · Updated 3 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆94 · Updated 4 months ago
- A simple MLLM that surpasses QwenVL-Max using only open-source data, built on a 14B LLM. ☆38 · Updated last year
- [arXiv: 2505.12307] LogicOCR: Do Your Large Multimodal Models Excel at Logical Reasoning on Text-Rich Images? ☆34 · Updated 5 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆94 · Updated 2 months ago
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding". ☆43 · Updated last year
- ☆186 · Updated 8 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆210 · Updated last year
- [arXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆126 · Updated 4 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year