eric-ai-lab / Screen-Point-and-Read
Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding"
☆26 · Updated 8 months ago
Alternatives and similar repositories for Screen-Point-and-Read:
Users interested in Screen-Point-and-Read are comparing it to the repositories listed below.
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents"☆42Updated last month
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆55 · Updated 6 months ago
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos☆33Updated 2 weeks ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration☆30Updated 3 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆50 · Updated 4 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆24 · Updated last month
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆60 · Updated 9 months ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆36 · Updated last year
- ☆84 · Updated 2 weeks ago
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆35 · Updated 8 months ago
- ☆30 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆22 · Updated 3 weeks ago
- ☆40 · Updated 2 weeks ago
- (ICLR 2025) The Official Code Repository for GUI-World. ☆54 · Updated 4 months ago
- This is the implementation of CounterCurate, the data curation pipeline for both physical and semantic counterfactual image-caption pairs. ☆18 · Updated 9 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆27 · Updated last week
- Official implementation of Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning ☆15 · Updated 5 months ago
- Codes for ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding ☆24 · Updated 3 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆70 · Updated 4 months ago
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆32 · Updated last year
- Multimodal RewardBench ☆38 · Updated 2 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆49 · Updated 3 weeks ago
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models ☆42 · Updated last month
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models."☆41Updated 6 months ago
- [EMNLP 2024 Findings] ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs ☆24 · Updated 6 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆28 · Updated last month
- This repo contains code for the paper "Both Text and Images Leaked! A Systematic Analysis of Data Contamination in Multimodal LLM" ☆13 · Updated 3 weeks ago
- ☆29 · Updated 6 months ago
- Code for the paper "Unified Text-to-Image Generation and Retrieval" ☆15 · Updated 9 months ago