jun0wanan / awesome-large-multimodal-agents
☆480 · Updated last year
Alternatives and similar repositories for awesome-large-multimodal-agents
Users interested in awesome-large-multimodal-agents are comparing it to the libraries listed below:
- Paper list about multimodal and large language models, only used to record papers I read in the daily arXiv for personal needs. ☆753 · Updated last week
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆357 · Updated 9 months ago
- ✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models ☆636 · Updated 7 months ago
- TPAMI 2026 | This repository collects awesome surveys, resources, and papers for lifelong learning LLM agents ☆261 · Updated this week
- Latest Advances on Long Chain-of-Thought Reasoning ☆596 · Updated 5 months ago
- ☆487 · Updated 3 months ago
- ☆458 · Updated 5 months ago
- Collect every awesome work about r1! ☆426 · Updated 8 months ago
- Papers related to LLM agents published at top conferences ☆320 · Updated 9 months ago
- Building a comprehensive and handy list of papers for GUI agents ☆602 · Updated 2 months ago
- This is the repository for the Tool Learning survey. ☆474 · Updated 5 months ago
- Efficient Multimodal Large Language Models: A Survey ☆383 · Updated 8 months ago
- Towards Large Multimodal Models as Visual Foundation Agents ☆248 · Updated 8 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆933 · Updated 2 months ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆143 · Updated last year
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆485 · Updated last year
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆215 · Updated 6 months ago
- The model, data, and code for the visual GUI agent SeeClick ☆456 · Updated 6 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆280 · Updated 7 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆954 · Updated 3 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆555 · Updated last year
- Paper list for Personal LLM Agents ☆424 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆320 · Updated 3 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆536 · Updated 7 months ago
- ☆452 · Updated 11 months ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆621 · Updated 9 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆358 · Updated 2 years ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆359 · Updated last month
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated 2 years ago