Yangyi-Chen / Multimodal-AND-Large-Language-Models
A paper list about multimodal and large language models, used only to record papers I read in the daily arXiv for personal needs.
★754 · Updated 2 weeks ago
Alternatives and similar repositories for Multimodal-AND-Large-Language-Models
Users interested in Multimodal-AND-Large-Language-Models are comparing it to the repositories listed below.
- A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ★975 · Updated 4 months ago
- ★484 · Updated last year
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Mod… ★358 · Updated 10 months ago
- Latest Papers and Benchmarks in Reasoning with Foundation Models ★644 · Updated 7 months ago
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ★511 · Updated last month
- A curated list of awesome Multimodal studies. ★312 · Updated last month
- Latest Advances on Long Chain-of-Thought Reasoning ★605 · Updated 6 months ago
- Resources and paper list for "Thinking with Images for LVLMs". This repository accompanies our survey on how LVLMs can leverage visual in… ★1,313 · Updated this week
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ★489 · Updated last year
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ★950 · Updated 2 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ★363 · Updated last month
- This repository provides a valuable reference for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ★1,343 · Updated last month
- Up-to-date curated list of state-of-the-art large vision-language model hallucination research work, papers & resources ★262 · Updated 4 months ago
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ★342 · Updated 2 weeks ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM Computing Surveys, 2025. ★659 · Updated this week
- Research Trends in LLM-guided Multimodal Learning. ★357 · Updated 2 years ago
- LLM hallucination paper list ★331 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ★387 · Updated 9 months ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ★544 · Updated 8 months ago
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ★731 · Updated 3 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ★555 · Updated last year
- Collect every awesome work about r1! ★428 · Updated 9 months ago
- ★642 · Updated 6 months ago
- Paper List for In-context Learning ★874 · Updated last year
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ★1,076 · Updated 4 months ago
- Latest Advances on System-2 Reasoning ★1,320 · Updated 7 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ★397 · Updated last year
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ★839 · Updated 8 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ★325 · Updated 3 months ago
- Papers and Datasets on Instruction Tuning and Following. ★506 · Updated last year