jun0wanan / awesome-large-multimodal-agents
☆427 · Updated 7 months ago
Alternatives and similar repositories for awesome-large-multimodal-agents
Users interested in awesome-large-multimodal-agents are comparing it to the repositories listed below.
- Paper list about multimodal and large language models, only used to record papers I read in the daily arxiv for personal needs. ☆621 · Updated this week
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆124 · Updated 5 months ago
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆568 · Updated last week
- Code and implementations for the paper "AgentGym: Evolving Large Language Model-based Agents across Diverse Environments" by Zhiheng Xi e… ☆462 · Updated 2 months ago
- Papers related to LLM agents published at top conferences ☆315 · Updated last month
- This is the repository for the Tool Learning survey. ☆369 · Updated 2 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆298 · Updated last month
- The model, data and code for the visual GUI Agent SeeClick ☆368 · Updated 5 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆155 · Updated 2 weeks ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆424 · Updated last month
- Building a comprehensive and handy list of papers for GUI agents ☆332 · Updated this week
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆323 · Updated last month
- Collect every awesome work about r1! ☆363 · Updated 2 weeks ago
- ☆527 · Updated 4 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆281 · Updated 6 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆681 · Updated last month
- ☆181 · Updated last month
- ✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models ☆575 · Updated 3 weeks ago
- Awesome RL-based LLM Reasoning ☆489 · Updated last week
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆363 · Updated this week
- Towards Large Multimodal Models as Visual Foundation Agents ☆210 · Updated 3 weeks ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆349 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆343 · Updated 2 weeks ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆312 · Updated last year
- Scaling Deep Research via Reinforcement Learning in Real-world Environments. ☆363 · Updated last month
- ☆229 · Updated last year
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning ☆475 · Updated this week
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆314 · Updated 11 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆294 · Updated 2 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆278 · Updated 8 months ago