jun0wanan / awesome-large-multimodal-agents
☆466 · Updated last year
Alternatives and similar repositories for awesome-large-multimodal-agents
Users interested in awesome-large-multimodal-agents are comparing it to the repositories listed below.
- Paper list about multimodal and large language models, used to record papers read from the daily arXiv for personal needs. ☆739 · Updated last week
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆342 · Updated 7 months ago
- ✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models ☆625 · Updated 4 months ago
- Collects every awesome work about r1! ☆420 · Updated 5 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆240 · Updated 6 months ago
- ☆392 · Updated 3 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆532 · Updated 3 months ago
- The repository for the Tool Learning survey. ☆449 · Updated 2 months ago
- Evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆510 · Updated 5 months ago
- Collects awesome surveys, resources, and papers on lifelong-learning LLM agents. ☆240 · Updated 4 months ago
- Building a comprehensive and handy list of papers for GUI agents. ☆529 · Updated last week
- Papers related to LLM agents published at top conferences. ☆320 · Updated 6 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆201 · Updated 4 months ago
- ☆415 · Updated 2 weeks ago
- Efficient Multimodal Large Language Models: A Survey ☆373 · Updated 5 months ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆465 · Updated 9 months ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agents ☆138 · Updated 10 months ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆613 · Updated 7 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆305 · Updated last week
- Research Trends in LLM-guided Multimodal Learning. ☆355 · Updated 2 years ago
- The model, data, and code for the visual GUI agent SeeClick. ☆433 · Updated 3 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side by side while providing imag… ☆539 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU. ☆356 · Updated last year
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ☆470 · Updated 5 months ago
- Paper collections of multimodal LLMs for Math/STEM/Code. ☆128 · Updated 2 months ago
- The official repository for Retrieval-Augmented Visual Question Answering. ☆238 · Updated 10 months ago
- ☆365 · Updated last week
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆880 · Updated last month
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆260 · Updated 5 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆294 · Updated last year