patrick-tssn / Awesome-Colorful-LLM
Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics, Fundamental Sciences such as Mathematics, and Omics.
☆123 Updated last month
Alternatives and similar repositories for Awesome-Colorful-LLM
Users interested in Awesome-Colorful-LLM are comparing it to the libraries listed below.
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆105 Updated last year
- ☆65 Updated last year
- ☆73 Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 Updated last year
- [ICCV 2025] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆164 Updated 3 months ago
- TouchStone: Evaluating Vision-Language Models by Language Models ☆83 Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆218 Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 Updated last month
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆283 Updated 10 months ago
- ☆100 Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 Updated 7 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 Updated 3 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆154 Updated 9 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆266 Updated last year
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆241 Updated last week
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆86 Updated last year
- ☆50 Updated last year
- [ACL 2025] Synthetic data generation pipelines for text-rich images. ☆87 Updated 4 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆108 Updated last month
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆352 Updated last year
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 Updated 8 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆50 Updated 7 months ago
- Official repository of the MMDU dataset ☆92 Updated 9 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆120 Updated 2 months ago
- [ACL 2025] MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆46 Updated last month
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆67 Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆80 Updated 5 months ago
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆79 Updated 5 months ago
- An RLHF Infrastructure for Vision-Language Models ☆179 Updated 7 months ago
- Official GitHub repo of G-LLaVA ☆145 Updated 4 months ago