google-research-datasets / seq2act
This repository contains the open-source version of the datasets used for different parts of training and testing models that ground natural language to UI actions, as described in the paper "Mapping Natural Language Instructions to Mobile UI Action Sequences" by Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge, which is acc…
☆32 · Updated 4 years ago
Alternatives and similar repositories for seq2act
Users interested in seq2act are comparing it with the libraries listed below.
- Mobile App Tasks with Iterative Feedback (MoTIF): Addressing Task Feasibility in Interactive Visual Environments ☆61 · Updated 10 months ago
- Seq2act: Mapping Natural Language Instructions to Mobile UI Action Sequences, from Google Research ☆14 · Updated 4 years ago
- The dataset includes UI object type labels (e.g., BUTTON, IMAGE, CHECKBOX) that describe the semantic type of a UI object on Android ap… ☆52 · Updated 3 years ago
- A Universal Platform for Training and Evaluation of Mobile Interaction ☆47 · Updated 4 months ago
- It includes two datasets that are used in the downstream tasks for evaluating UIBert: App Similar Element Retrieval data and Visual Item … ☆44 · Updated 3 years ago
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control ☆57 · Updated 5 months ago
- The dataset includes screen summaries that describe Android app screenshots' functionalities. It is used for training and evaluation of … ☆57 · Updated 3 years ago
- [EMNLP 2022] The baseline code for the META-GUI dataset ☆14 · Updated 11 months ago
- (ICLR 2025) The Official Code Repository for GUI-World ☆60 · Updated 6 months ago
- Official implementation for "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆239 · Updated 11 months ago
- The dataset includes widget captions that describe UI elements' functionalities. It is used for training and evaluation of the widget ca… ☆21 · Updated 4 years ago
- Consists of ~500k human annotations on the RICO dataset identifying various icons based on their shapes and semantics, and associations b… ☆28 · Updated 11 months ago
- ☆18 · Updated last year
- [ACL 2024] On the Multi-turn Instruction Following for Conversational Web Agents ☆16 · Updated 8 months ago
- ☆29 · Updated 8 months ago
- GUI Odyssey is a comprehensive dataset for training and evaluating cross-app navigation agents. GUI Odyssey consists of 7,735 episodes fr… ☆116 · Updated 7 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆88 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Official implementation for "Android in the Zoo: Chain-of-Action-Thought for GUI Agents" (Findings of EMNLP 2024) ☆89 · Updated 8 months ago
- GUICourse: From General Vision Language Models to Versatile GUI Agents ☆117 · Updated 11 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding" ☆28 · Updated 10 months ago
- ☆53 · Updated last year
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding ☆27 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆61 · Updated 8 months ago
- ☆59 · Updated last year
- LLM Dynamic Planner - Combining LLM with PDDL Planners to solve an embodied task ☆44 · Updated 5 months ago
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format and desc… ☆72 · Updated last year
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆58 · Updated last year
- Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning" ☆26 · Updated last year