OpenGVLab / Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything else related to large language models for tools
☆68 · Updated 2 years ago
Alternatives and similar repositories for Awesome-LLM4Tool
Users interested in Awesome-LLM4Tool are comparing it to the repositories listed below.
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 · Updated 4 months ago
- ☆50 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- A curated list of resources about long context in large language models and video understanding. ☆31 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- Reading list for Multimodal Large Language Models ☆68 · Updated 2 years ago
- ☆65 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆46 · Updated 10 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆103 · Updated last year
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆46 · Updated 2 years ago
- Attaching human-like eyes to the large language model. The codes of IEEE TMM paper "LMEye: An Interactive Perception Network for Large La… ☆48 · Updated last year
- Vision Large Language Models trained on M3IT instruction tuning dataset ☆17 · Updated 2 years ago
- ☆31 · Updated last year
- ☆74 · Updated last year
- ☆66 · Updated 2 years ago
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆59 · Updated 11 months ago
- This repo contains code and data for ICLR 2025 paper MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs ☆31 · Updated 7 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆27 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 · Updated 2 years ago
- Multimodal-Procedural-Planning ☆92 · Updated 2 years ago
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion ☆57 · Updated last year
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆90 · Updated 2 years ago
- [ICLR 2024] Trajectory-as-Exemplar Prompting with Memory for Computer Control ☆60 · Updated 9 months ago
- [ACL2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 7 months ago
- Code for ACL2023 paper: Pre-Training to Learn in Context ☆107 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆126 · Updated 5 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆82 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 4 months ago
- An official codebase for paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos (ICCV 23)" ☆52 · Updated 2 years ago