OpenGVLab / Awesome-LLM4Tool
A curated list of papers, repositories, tutorials, and anything related to large language models for tools
☆68 · Updated 2 years ago
Alternatives and similar repositories for Awesome-LLM4Tool
Users interested in Awesome-LLM4Tool are comparing it to the libraries listed below.
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,…☆124 · Updated 3 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models☆83 · Updated last year
- ☆50 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs☆45 · Updated 9 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models☆44 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o…☆27 · Updated last month
- Reading list for Multimodal Large Language Models☆68 · Updated 2 years ago
- A curated list of resources about long-context in large language models and video understanding.☆31 · Updated 2 years ago
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion☆56 · Updated last year
- ☆65 · Updated last year
- An official codebase for the paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos (ICCV 23)"☆52 · Updated 2 years ago
- ☆66 · Updated 2 years ago
- ☆74 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain☆104 · Updated last year
- Attaching human-like eyes to the large language model. The code of the IEEE TMM paper "LMEye: An Interactive Perception Network for Large La…☆48 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs☆43 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs☆124 · Updated 4 months ago
- Vision Large Language Models trained on the M3IT instruction tuning dataset☆17 · Updated 2 years ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073☆29 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"☆58 · Updated 10 months ago
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild☆47 · Updated last year
- This repository is maintained to release the dataset and models for multimodal puzzle reasoning.☆101 · Updated 6 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI☆102 · Updated 5 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment"☆69 · Updated 2 years ago
- Fine-tuning LLaMA to follow instructions within 1 hour and 1.2M parameters☆90 · Updated 2 years ago
- Data and code for the NeurIPS 2021 paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning"☆52 · Updated last year
- Code for the ACL 2023 paper "Pre-Training to Learn in Context"☆107 · Updated last year
- ☆31 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs☆89 · Updated 10 months ago
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models"☆90 · Updated last year