OpenGVLab / Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything related to large language models for tools
☆67 · Updated last year
Alternatives and similar repositories for Awesome-LLM4Tool
Users interested in Awesome-LLM4Tool are comparing it to the repositories listed below.
- ☆51 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated 11 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated 11 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated 5 months ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆122 · Updated last week
- Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆57 · Updated 7 months ago
- ☆64 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆84 · Updated 11 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- A curated list of resources about long-context in large language models and video understanding. ☆31 · Updated last year
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated 8 months ago
- ☆63 · Updated last year
- ACL 2025: Synthetic data generation pipelines for text-rich images. ☆72 · Updated 3 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆69 · Updated 6 months ago
- This repo contains code and data for ICLR 2025 paper MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs ☆31 · Updated 2 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆43 · Updated 3 months ago
- ☆30 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆39 · Updated 6 months ago
- The code and data for the paper JiuZhang3.0 ☆45 · Updated last year
- ☆99 · Updated last year
- ☆73 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 7 months ago
- ☆36 · Updated 8 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆101 · Updated 2 months ago
- Code for the paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆51 · Updated 5 months ago
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆46 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆28 · Updated 10 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 7 months ago