google-research-datasets / uicrit
UICrit is a dataset containing human-generated natural language design critiques, corresponding bounding boxes for each critique, and design quality ratings for 1,000 mobile UIs from RICO. This dataset was collected for our UIST '24 paper: https://arxiv.org/abs/2407.08850.
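The per-UI record layout described above (critiques, one bounding box per critique, and a design quality rating, keyed to a RICO screen) can be sketched as follows. The field names are illustrative assumptions for this sketch, not the dataset's published schema; consult the repository and paper for the actual column names and formats:

```python
from dataclasses import dataclass

@dataclass
class Critique:
    comment: str        # natural-language design critique
    bounding_box: tuple # (left, top, right, bottom) region the critique refers to

@dataclass
class UICritRecord:
    rico_id: int                 # screen identifier from the RICO dataset
    design_quality_rating: int   # overall design quality score for the UI
    critiques: list              # each critique carries its own bounding box

# Hypothetical example record (values are made up for illustration):
record = UICritRecord(
    rico_id=12345,
    design_quality_rating=4,
    critiques=[Critique("Button contrast is too low.", (10, 20, 200, 80))],
)
print(len(record.critiques))  # → 1
```

This mirrors the dataset's one-rating-per-UI, one-box-per-critique structure; a real loader would populate these records from the released files.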
☆26 · Updated last year
Alternatives and similar repositories for uicrit
Users interested in uicrit are comparing it to the libraries listed below.
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆98 · Updated last year
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025) ☆146 · Updated 7 months ago
- Official repo of Graphist ☆129 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆22 · Updated 5 months ago
- Assessing Context-Aware Creative Intelligence in MLLMs ☆23 · Updated 5 months ago
- VPEval codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 · Updated 2 years ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆69 · Updated 8 months ago
- Preference Learning for LLaVA ☆58 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆48 · Updated last year
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆53 · Updated 3 years ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆85 · Updated 11 months ago
- Continuous diffusion for layout generation ☆52 · Updated 10 months ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆20 · Updated this week
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆155 · Updated 3 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 10 months ago
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft ☆45 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆92 · Updated last year
- A large-scale dataset for training and evaluating a model's ability on dense text image generation ☆86 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆91 · Updated last year
- [NeurIPS 2024 D&B] VideoGUI: A Benchmark for GUI Automation from Instructional Videos ☆48 · Updated 6 months ago
- Multimodal RewardBench ☆58 · Updated 10 months ago
- ☆31 · Updated last year
- ☆24 · Updated last year
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆88 · Updated 5 months ago
- PyTorch implementation of Twelve Labs' Video Foundation Model evaluation framework & open embeddings ☆29 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆64 · Updated 10 months ago
- [CVPR 2025] VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding ☆22 · Updated 9 months ago
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆66 · Updated last year
- Official implementation of MIA-DPO ☆70 · Updated 11 months ago
- ☆80 · Updated 6 months ago