yh-hust / VisuRiddles
VisuRiddles: Fine-grained Perception is an Important Thing for Multimodal Large Models in Riddle Solving
☆18 · Updated 3 months ago
Alternatives and similar repositories for VisuRiddles
Users interested in VisuRiddles are comparing it to the libraries listed below
- The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”. ☆44 · Updated last year
- R1-Vision: Let's first take a look at the image ☆48 · Updated 11 months ago
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆174 · Updated 6 months ago
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka ☆322 · Updated 7 months ago
- AAAI 2024: Visual Instruction Generation and Correction ☆96 · Updated last year
- ☆14 · Updated 7 months ago
- ☆13 · Updated 6 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆166 · Updated last year
- Turning a CLIP Model into a Scene Text Detector (CVPR 2023) | Turning a CLIP Model into a Scene Text Spotter (TPAMI) ☆200 · Updated last year
- A simulated dataset of 9,536 charts with associated data annotations in CSV format. ☆26 · Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆98 · Updated last year
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ☆110 · Updated 5 months ago
- The official implementation of RAR ☆92 · Updated last month
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆153 · Updated 4 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆75 · Updated 8 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆203 · Updated last year
- ☆359 · Updated 2 years ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆280 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆295 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆410 · Updated 8 months ago
- ☆24 · Updated last year
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆145 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆93 · Updated 2 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆247 · Updated 5 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆180 · Updated last year
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆221 · Updated 10 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆244 · Updated 5 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆284 · Updated 8 months ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆91 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year