OpenGVLab / Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything related to large language models for tools
☆68 · Updated last year
Alternatives and similar repositories for Awesome-LLM4Tool:
Users interested in Awesome-LLM4Tool are comparing it to the libraries listed below
- ☆59 · Updated last year
- ☆47 · Updated last year
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated last week
- Touchstone: Evaluating Vision-Language Models by Language Models ☆82 · Updated last year
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆49 · Updated 4 months ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆117 · Updated this week
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 8 months ago
- Official Code of IdealGPT ☆34 · Updated last year
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion ☆49 · Updated 11 months ago
- A curated list of resources about long context in large language models and video understanding. ☆30 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated last year
- ☆24 · Updated 3 months ago
- ☆73 · Updated 11 months ago
- ☆38 · Updated 3 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆92 · Updated 2 months ago
- MATH-Vision dataset and code to measure Multimodal Mathematical Reasoning capabilities. ☆86 · Updated 4 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆126 · Updated 4 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆21 · Updated 2 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆26 · Updated 7 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆23 · Updated 4 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆43 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆77 · Updated 7 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆63 · Updated 4 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆71 · Updated 3 weeks ago
- Preference Learning for LLaVA ☆37 · Updated 3 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆96 · Updated last month
- ☆29 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 7 months ago