OpenGVLab / Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything else related to large language models for tools
☆67 · Updated last year
Alternatives and similar repositories for Awesome-LLM4Tool
Users interested in Awesome-LLM4Tool are comparing it to the libraries listed below.
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- ☆50 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 · Updated last month
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆39 · Updated 7 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆105 · Updated last year
- Reading list for Multimodal Large Language Models ☆68 · Updated last year
- ☆31 · Updated last year
- Attaching human-like eyes to the large language model. The codes of IEEE TMM paper "LMEye: An Interactive Perception Network for Large La… ☆48 · Updated 11 months ago
- Vision Large Language Models trained on M3IT instruction tuning dataset ☆17 · Updated last year
- ☆73 · Updated last year
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion ☆55 · Updated last year
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 4 months ago
- ☆66 · Updated 2 years ago
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated last year
- This repo contains code and data for the ICLR 2025 paper "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs" ☆31 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated last month
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆29 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆81 · Updated 11 months ago
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆33 · Updated last year
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆25 · Updated 2 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆23 · Updated last week
- ☆55 · Updated last year
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated 9 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆64 · Updated last year
- The repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 8 months ago
- ☆65 · Updated last year
- A curated list of resources about long context in large language models and video understanding ☆31 · Updated last year