google-research-datasets / uicrit
UICrit is a dataset containing human-generated natural language design critiques, corresponding bounding boxes for each critique, and design quality ratings for 1,000 mobile UIs from RICO. This dataset was collected for our UIST '24 paper: https://arxiv.org/abs/2407.08850.
☆22 · Updated 8 months ago
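Each UICrit record pairs a natural-language critique with the bounding box it refers to and a screen-level design quality rating. A minimal sketch of working with records shaped this way (the field names, box format, and rating scale here are illustrative assumptions, not the dataset's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Critique:
    screen_id: int                        # RICO screen identifier (hypothetical field name)
    comment: str                          # natural-language design critique
    bbox: tuple[int, int, int, int]       # assumed (x_min, y_min, x_max, y_max) region
    design_quality: int                   # assumed screen-level quality rating

# Toy records mimicking the dataset's structure, not real UICrit data.
records = [
    Critique(1234, "Low contrast between button text and background.", (10, 40, 300, 90), 4),
    Critique(1234, "Inconsistent icon sizes in the navigation bar.", (0, 0, 360, 56), 4),
    Critique(5678, "Clear visual hierarchy; primary action stands out.", (20, 500, 340, 560), 8),
]

# Example query: collect critiques for screens rated below a threshold.
low_rated = [r.comment for r in records if r.design_quality < 5]
print(len(low_rated))  # 2
```

Keeping one record per (critique, box) pair, rather than one per screen, makes region-level filtering like the query above a simple list comprehension.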
Alternatives and similar repositories for uicrit
Users interested in uicrit are comparing it to the libraries listed below.
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025) ☆113 · Updated last month
- A benchmark dataset for evaluating LLMs' SVG editing capabilities ☆34 · Updated 9 months ago
- Continuous diffusion for layout generation ☆45 · Updated 5 months ago
- Official Repo of Graphist ☆123 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆130 · Updated last year
- ☆25 · Updated last year
- ☆21 · Updated 3 months ago
- [ACL'25 Main] ChartCoder: Advancing Multimodal Large Language Model for Chart-to-Code Generation ☆56 · Updated 3 weeks ago
- CycleReward is a reward model trained on cycle consistency preferences to measure image-text alignment. ☆31 · Updated last month
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 5 months ago
- ☆21 · Updated 11 months ago
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆120 · Updated last year
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 · Updated last year
- OpenCOLE: Towards Reproducible Automatic Graphic Design Generation [Inoue+, CVPRW2024 (GDUG)] ☆74 · Updated 4 months ago
- [CVPR 2023] SketchXAI: A First Look at Explainability for Human Sketches ☆24 · Updated last year
- Preference Learning for LLaVA ☆47 · Updated 8 months ago
- Implementation of CanvasVAE: Learning to Generate Vector Graphic Documents, ICCV 2021 ☆67 · Updated 2 years ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆86 · Updated 8 months ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆52 · Updated 3 years ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 8 months ago
- Implementation and dataset for paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆39 · Updated last month
- [CVPR 2023 highlight] Towards Flexible Multi-modal Document Models ☆57 · Updated last year
- Code and data for EMNLP 2023 paper "Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?" ☆13 · Updated last year
- https://arxiv.org/abs/2209.15162 ☆50 · Updated 2 years ago
- Multimodal RewardBench ☆42 · Updated 4 months ago
- Code, Data and Red Teaming for ZeroBench ☆46 · Updated 2 months ago
- ☆37 · Updated last year
- Holistic evaluation of multimodal foundation models ☆48 · Updated 11 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated 11 months ago
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Updated 11 months ago