jun0wanan / awesome-large-multimodal-agents
☆477 · Updated last year
Alternatives and similar repositories for awesome-large-multimodal-agents
Users interested in awesome-large-multimodal-agents are comparing it to the repositories listed below.
- Paper list about multimodal and large language models, used to record papers the author reads in the daily arXiv. ☆747 · Updated last month
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆350 · Updated 8 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆527 · Updated 6 months ago
- ✨✨ Latest Papers and Benchmarks in Reasoning with Foundation Models ☆632 · Updated 5 months ago
- Building a comprehensive and handy list of papers for GUI agents ☆563 · Updated last month
- ✨✨ Latest Papers and Datasets on Mobile and PC GUI Agents ☆140 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆245 · Updated 7 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆564 · Updated 4 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆212 · Updated 5 months ago
- ☆435 · Updated 4 months ago
- The model, data, and code for the visual GUI agent SeeClick ☆444 · Updated 4 months ago
- This repository collects awesome surveys, resources, and papers on lifelong-learning LLM agents ☆252 · Updated 6 months ago
- Collects every awesome work about R1! ☆423 · Updated 7 months ago
- Papers related to LLM agents published at top conferences ☆320 · Updated 7 months ago
- Official repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆272 · Updated 6 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆358 · Updated last year
- Explore the Multimodal "Aha Moment" on a 2B Model ☆619 · Updated 8 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Updated last year
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆474 · Updated 10 months ago
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs) ☆916 · Updated 2 months ago
- Efficient Multimodal Large Language Models: A Survey ☆376 · Updated 7 months ago
- This is the repository for the Tool Learning survey ☆459 · Updated 3 months ago
- Official implementation of "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Findings of ACL 2024) ☆255 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆316 · Updated last month
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆298 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side by side while providing imag… ☆549 · Updated last year
- This is the official repository for Retrieval-Augmented Visual Question Answering ☆242 · Updated 11 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆357 · Updated 2 weeks ago
- R1-Onevision, a visual language model capable of deep CoT reasoning ☆572 · Updated 7 months ago
- ☆438 · Updated last month