jun0wanan / awesome-large-multimodal-agents
☆479 · Updated last year
Alternatives and similar repositories for awesome-large-multimodal-agents
Users interested in awesome-large-multimodal-agents are comparing it to the libraries listed below.
- Paper list about multimodal and large language models, only used to record papers I read from the daily arXiv for personal needs. ☆749 · Updated this week
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ☆353 · Updated 9 months ago
- ✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models ☆636 · Updated 6 months ago
- ✨✨Latest Papers and Datasets on Mobile and PC GUI Agent ☆141 · Updated last year
- Towards Large Multimodal Models as Visual Foundation Agents ☆248 · Updated 8 months ago
- Latest Advances on Long Chain-of-Thought Reasoning ☆580 · Updated 5 months ago
- Papers related to LLM agents published at top conferences ☆320 · Updated 8 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆532 · Updated 7 months ago
- This repository collects awesome surveys, resources, and papers for lifelong learning LLM agents ☆257 · Updated 6 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆358 · Updated 2 years ago
- The model, data, and code for the visual GUI agent SeeClick ☆449 · Updated 5 months ago
- Building a comprehensive and handy list of papers for GUI agents ☆587 · Updated 2 months ago
- GitHub page for "Large Language Model-Brained GUI Agents: A Survey" ☆216 · Updated 6 months ago
- Collects every awesome work about R1! ☆427 · Updated 7 months ago
- This is the repository for the Tool Learning survey. ☆467 · Updated 4 months ago
- ☆452 · Updated 5 months ago
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆277 · Updated 7 months ago
- Efficient Multimodal Large Language Models: A Survey ☆380 · Updated 7 months ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆481 · Updated 11 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆321 · Updated 2 months ago
- ☆454 · Updated 2 months ago
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs). ☆934 · Updated 3 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated 2 years ago
- Explore the Multimodal "Aha Moment" on 2B Model ☆620 · Updated 9 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆299 · Updated last year
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆921 · Updated last month
- A collection of recent papers on building autonomous agents, covering two topics: RL-based and LLM-based agents. ☆737 · Updated last year
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆360 · Updated 2 weeks ago
- Extends OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆834 · Updated 7 months ago
- Paper collections of multi-modal LLMs for Math/STEM/Code. ☆132 · Updated last month